Bias in AI: here's the evidence, but what's the solution?
- Elma Glasgow

Last weekend I wrote a 'ranty' LinkedIn post about bias in AI, and received a pretty big and instant response.
I posted on a Sunday evening while watching The Night Manager, assuming everyone else in similar time zones would be too busy doing other things to notice the update.
I expected two or three comments from close connections at the most. But LinkedIn surprised me with a significant and swift reaction, which speaks to the concern out there about the topic.
So what's all the fuss about?
Here's what happened.
I had discovered Lovable via the brilliant product development social enterprise Kimolian Academy. I thought I'd experiment with it and ended up redesigning my homepage.
When I realised I couldn't easily transfer the design to my Wix website, I duplicated the new layout manually. I also decided to change my photo.
I uploaded a nice selfie taken in lovely Grantchester, south of Cambridge, UK. I'd taken it after a morning of work and was lipgloss-less. So, I tried out the Wix AI feature to enhance my photo; I instructed it to add a light layer of pink-brown colour to my lips.
I entered a straightforward, accurate prompt in Wix's AI enhancement feature – I wasn't using the feature that creates a whole new image. And I didn't mention skin colour or other features.
It's key to mention that.
The output? A white woman. I'm not white. And she looks nothing like me.
The 'new me' also has more Caucasian hair – big, coiffured short curls (straight out of the '80s), rather than my tighter, loose-hanging curls.
I had only asked for colour on the lips, not a replacement of my skin colour, facial features or hair – and did I mention the AI slimmed down my body too?
I was horrified at first. Then I saw the funny side. It was ridiculous. But how wrong could AI be? And how racist can it be? Very, it turns out.
The cause of biased AI
So, why did AI whitewash me?
Probably because it has been trained to assume whiteness as the 'norm'. I've also noticed bias in AI-generated alt text on various platforms. The output never mentions skin colour. What's going on?
Basically, the people who've built AI are typically white and male, and bias is caused by a failure to include diverse perspectives. As far as I understand it, this includes the use of poor-quality or too few 'diverse' data sets.
I'm no expert, so I recommend doing a quick search to find out more – you'll see hundreds of articles on the subject.
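For the more technically minded, here's a rough, entirely made-up sketch (in Python, using scikit-learn) of what 'too little diverse data' can do: a simple model trained mostly on one group learns that group's patterns and tends to perform far worse on the smaller group. Every number, feature and group below is invented purely for illustration – it isn't from Wix, Lovable or any real system.

```python
# Toy illustration only: under-representation in training data can mean
# much lower accuracy for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, centre, label_rule):
    """Generate n samples with 2 features; labels follow the group's own pattern."""
    X = rng.normal(loc=centre, scale=1.0, size=(n, 2))
    return X, label_rule(X)

# Majority group: 95% of the data, labels driven by feature 0.
X_maj, y_maj = make_group(9500, 0.0, lambda X: (X[:, 0] > 0).astype(int))
# Minority group: only 5% of the data, labels driven by feature 1 instead.
X_min, y_min = make_group(500, 2.0, lambda X: (X[:, 1] > 2.0).astype(int))

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * len(y_maj) + [1] * len(y_min))  # 0 = majority, 1 = minority

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

# Accuracy measured separately per group: the model has mostly learned the
# majority's pattern, so it usually does much worse on the minority group.
for g, name in [(0, "majority"), (1, "minority")]:
    mask = g_te == g
    print(f"{name} accuracy: {model.score(X_te[mask], y_te[mask]):.2f}")
```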
It's more serious than a photo...
Knowing how important this matter is, I took to LinkedIn.
The amount of interest surprised me. Most of the comments, reposts and so on came in the first couple of hours, and the engagement came from around the world.
People are very angry and frustrated. And it's not just about photos. It's far more serious than that.
The rapid development of AI can't be ignored. Especially when bias is so prevalent. And especially because AI is increasingly used in healthcare and other critical areas of life, like criminal justice, finance and employment.
Here's an article about errors in mammogram results for Black women due to AI. These errors can cause false positives, putting women at risk of unnecessary biopsies, possible loss of income, and stress.
While looking for other instances of bias in AI in medicine, these quotes caught my eye:
Research has shown that programs trained on images taken from people with lighter skin types only might not be as accurate for people with darker skin, and vice versa. - Dr David Wen, Oxford University Hospitals NHS Foundation Trust
This means medical AI may not pick up symptoms, such as rashes or spots, on darker skin tones.
And I'm totally in support of the statement below...
AI wasn't built to work fairly for everyone
Equity wasn’t treated as a basic requirement in how systems were designed, trained, tested and governed. And we're now at a point where fixing this huge problem is far harder than it would've been at the start.
I already prepare questions before medical examinations to keep myself safe. I've witnessed biased attitudes and approaches in hospitals too many times to believe it's incidental. But AI could send such incidents sky-high, and so I'm becoming even more critical of medical decisions. Especially as I age.
Importantly, this isn't just about skin tone – it also impacts other minoritised communities and our intersectionalities.
It was vanity that resulted in this article, but it's prompted me to focus on a more serious concern. And if we want our communities and society to be healthy, we should all be concerned.
And, readers, my photo remains lipgloss-less!
Need public-led solutions?
If you're involved in ai in medicine or healthcare, and would like to collaborate on an ethical community engagement initiative, please get in touch for a chat: hello@elmaglasgowconsulting.com.



