Hannah van Kolfschooten & Astrid Pilottin
In March 2024, the European Parliament formally adopted the European Union Artificial Intelligence (AI) Act (EU AI Act), establishing harmonized rules for AI systems placed on the European market. It is the first legal framework to regulate generative AI (GenAI). GenAI uses machine learning techniques to create new content, such as text, music, or images, in response to user input. It relies on large language models (LLMs) to simulate human conversation, often in the form of conversational chatbots. Lay people increasingly use GenAI to generate all sorts of output, including health-related information. A specific type of GenAI, image-generative AI, can convey health information through images. Indeed, health-themed AI-generated images increasingly appear in newspapers, social media posts, advertisements, educational materials, and academic journals.1-5
This development may, however, perpetuate and reinforce existing harmful stereotypes about patients and health care professionals. Evidence of demographic stereotypes in AI-generated images is growing.6 Notably, several empirical studies show that GenAI often produces stereotypes when used to generate textual health information.7 For example, Zack et al8 recently revealed how the LLMs underlying GenAI systems often reproduce health-related stereotypes concerning race, ethnicity, and gender. Similarly, in this journal, Gravel et al9 reported the limited accuracy of GenAI medical chatbots for certain patient groups. As image-generative AI tools such as ChatGPT-4, DALL-E, and Midjourney run on the same LLMs as GenAI chatbots, similar harmful stereotypes may appear in health-themed imagery.
Against this background, we conducted a small-scale exploratory study to examine how GenAI systems can produce negative medical stereotypes in AI-generated images. Our findings inspired this commentary. Explicating the medical stereotypes in AI-generated images serves 2 important objectives. First, as the dissemination of such images is growing, it is essential to raise awareness of GenAI's propensity to embed harmful stereotypes in health-themed images. The impact of stereotypical health-themed images can be particularly detrimental, as they can influence (1) patients' behavior toward health care professionals and their decisions about accessing health care and sharing information, and (2) health care professionals' behavior toward certain patient groups and, in turn, the health outcomes of these groups.10 Second, visualizing biases in images is an effective way to help people understand AI-produced biases in general, especially because most GenAI systems use the same LLMs for text and images.
Cite as: van Kolfschooten H, Pilottin A. Reinforcing Stereotypes in Health Care Through Artificial Intelligence–Generated Images: A Call for Regulation. Mayo Clin Proc Digit Health. 2024;2(3):335-341.

