Jess Smith, a former Australian Paralympic swimmer, discovered that ChatGPT’s AI image generator could finally create accurate images of people with disabilities like herself—something it couldn’t do just months earlier. Her experience highlights how AI systems are gradually improving their representation of disabled people, though significant gaps remain that reflect broader societal biases and exclusion.
What you should know: Smith’s initial attempts to generate an image of herself missing her left arm below the elbow resulted in AI creating images with two arms or prosthetic devices instead.
• When she asked ChatGPT why it struggled, the AI explained that it lacked sufficient training data.
• A recent attempt successfully generated an accurate image, prompting Smith to say: “Oh my goodness, it worked, it’s amazing it’s finally been updated.”
Why this matters: The shift represents progress beyond technical improvements, addressing fundamental questions about inclusion in AI development.
• “Representation in technology means being seen not as an afterthought, but as part of the world that’s being built,” Smith explains.
• For millions of people with disabilities, accurate AI representation affects how they’re perceived and included in digital spaces.
Ongoing challenges: Other users with disabilities continue facing similar problems with AI image generation.
• Naomi Bowman, who has sight in only one eye, found that ChatGPT altered her facial features when asked to blur photo backgrounds, even when she specifically explained her eye condition.
• “It now makes me sad as it shows the inherent bias within AI,” Bowman says, calling for more rigorous training to ensure fair representation.
The bigger picture: AI bias often mirrors societal blind spots, extending beyond disability representation.
• A 2019 US government study found facial recognition algorithms were far less accurate at identifying African-American and Asian faces than Caucasian faces.
• Abran Maldonado of Create Labs, a US-based company that builds culturally aware AI systems, emphasizes that diversity in AI requires cultural representation “at the creation stage” during data training and labeling.
What they’re saying: OpenAI, the company behind ChatGPT, acknowledged the improvements while noting that more work remains.
• “We know challenges remain, particularly around fair representation, and we’re actively working to improve this—including refining our post-training methods and adding more diverse examples to help reduce bias over time,” an OpenAI spokesperson said.
• Smith notes that conversations about disability often become “too awkward and uncomfortable so people back away,” highlighting the social barriers that compound technological ones.
Environmental concerns: Some experts criticize AI image generation’s energy consumption.
• Professor Gina Neff of Queen Mary University of London told the BBC that ChatGPT is “burning through energy,” with the data centers that power AI consuming more electricity in a year than 117 countries.