Perpetuating Prejudice: AI Reinforces Racial Stereotypes in Global Health Imagery

Urging the Global Health Community to Critically Assess AI-Generated Images in Media

Researchers from the Institute of Tropical Medicine (ITM) in Antwerp and the University of Oxford recently conducted a study that sheds light on the limitations of artificial intelligence (AI) in combating racist stereotypes prevalent in global health communication. The results are a wake-up call for AI developers and global health actors, emphasising the need for more accurate datasets and a more nuanced approach to AI-generated images. The research has been published in The Lancet Global Health.

The researchers tasked the AI tool "Midjourney Bot (5.1)" with generating images of Black African doctors providing care to white children, aiming to challenge stereotypes. Despite the generative power of AI, the bot consistently failed to produce the desired images: in over 300 attempts, the patients always had dark rather than light skin. Other prompts resulted in images featuring exaggerated and culturally offensive African elements, including giraffes, elephants, and caricatured versions of African clothing. Shockingly, some depictions even portrayed traditional African healers as white men in exotic clothing. Moreover, the AI consistently associated HIV patients receiving care with people of darker skin tones.

The research aimed to use AI to transform the stereotypical images frequently seen in global health communication, where Africans are often portrayed as dependent through the so-called "white saviour" trope. However, the results had the opposite effect, perpetuating prejudices and failing to convey a sense of modernity.

A picture says more than a thousand words

This poses a significant issue, given AI's widespread integration into public knowledge generation. Global health actors have also started using AI to create images for their reports and promotional materials. These images can perpetuate power imbalances, reinforce stereotypes and undermine communities by misrepresenting them. 

Koen Peeters, an anthropologist at ITM, highlights the importance of recognising global health images as political agents. He emphasises that racism, sexism, and coloniality are embedded social processes that manifest in everyday scenarios, including AI.

Remix of existing images

To understand the root of AI's shortcomings in generating inclusive images, it is essential to recognise that AI programmes learn from the content they absorb. Training AI models on datasets of representative images can help bridge the gap between AI-generated content and reality. This underscores the need for AI developers to diligently curate data that is free from biases and stereotypes.

“There remains a lack of attention to many of the ethical issues involved in using AI in global health. These images show us that there is still a lot of work to be done,” say Prof Patricia Kingori and Arsenii Alenichev from the University of Oxford.

The study is a call to action for the global health community, urging them to critically assess the use of AI-generated images in their media and promotional materials. It highlights that such images cannot be viewed in isolation.

Alenichev, A., Kingori, P., & Grietens, K. P. (2023). Reflections Before the Storm: The AI reproduction of biased imagery in global health visuals. The Lancet Global Health, 11(10), e1496–e1498. https://doi.org/10.1016/s2214-109x(23)00329-7
