Addressing Bias In Text-To-Image Generation Within Healthcare Through Norm-Critical Perspectives
University West, School of Business, Economics and IT, Division of Informatics. ORCID iD: 0000-0001-9094-4125
University West, Department of Health Sciences, Section for Health Promotion and Care Sciences (LOVHH). ORCID iD: 0000-0002-9024-5110
2024 (English). In: ICERI2024 Proceedings, IATED Digital Library, 2024, Vol. 1, p. 10233-10237. Conference paper, Published paper (Refereed)
Abstract [en]

The development of AI services, including text-to-image models such as DALL-E and Midjourney, has grown rapidly in recent years, enabling the creation of images from textual descriptions. These models are used across industries such as entertainment and advertising, but there are also risks associated with using text-to-image generative models, including discrimination, misuse, and the spread of misinformation. Discrimination involves cultural, racial, and gender biases; misuse includes privacy violations and harmful content; and misinformation risks destabilizing society through misleading or harmful content. While AI is being explored in education, there is a lack of research on how text-to-image models can challenge biases among healthcare students. This is critical, since healthcare professionals' conscious or unconscious norms, values, and attitudes have been identified as partial explanations for inequality in healthcare. Text-to-image models have great potential to increase the emphasis on norm awareness by creating images that challenge students' norms. Norm criticism is an approach that identifies and challenges what is generally accepted as "normal" in society, enabling students to identify various norms that may cause prejudice, discrimination, and marginalization, thereby developing their norm awareness. To achieve this, it is essential to integrate diversity and inclusion principles when using AI, to ensure that social biases are not maintained.

In this pilot study, we have chosen to limit the analysis to the representation of nurses and patients in the generated images, focusing on age, gender, and race/ethnicity. A total of 200 images were generated using Midjourney. The initial 80 images were produced using a simple prompt: "a nurse and a patient". Additional images were created with prompts specifically designed to produce images with a higher degree of inclusiveness and diversity. The second prompt was "A nurse and a patient, take equity, inclusion, and diversity into consideration"; the third was "A nurse and a patient, do not be stereotypical when creating the image"; and the fourth was "A nurse and a patient, adapt a norm critical perspective when creating the image".
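The analysis described above amounts to tallying hand-coded attributes of the generated images per prompt condition. A minimal sketch of such a tally in Python is shown below; the annotation fields, prompt labels, and example records are hypothetical illustrations, not the study's actual coding scheme or data.

```python
from collections import Counter

# Hypothetical hand-coded annotations: each generated image is coded for the
# nurse's perceived gender and ethnicity under a labelled prompt condition.
# Labels and values here are illustrative, not the study's real data.
annotations = [
    {"prompt": "baseline", "nurse_gender": "woman", "nurse_ethnicity": "white"},
    {"prompt": "baseline", "nurse_gender": "woman", "nurse_ethnicity": "white"},
    {"prompt": "equity", "nurse_gender": "woman", "nurse_ethnicity": "non-white"},
    {"prompt": "norm-critical", "nurse_gender": "man", "nurse_ethnicity": "white"},
]

def tally(records, prompt, field):
    """Count attribute values for one prompt condition."""
    return Counter(r[field] for r in records if r["prompt"] == prompt)

print(tally(annotations, "baseline", "nurse_gender"))  # Counter({'woman': 2})
```

Comparing such counts across the four prompt conditions makes shifts in representation (e.g. more non-white or male nurses under a norm-critical prompt) directly visible.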

With the first prompt, the preliminary results show a clear tendency for most of the nurses to be represented by a young white woman; in only three of the images was the nurse depicted as a person of non-white ethnicity. The patients varied more in age, gender, and skin color, but were also mostly represented by white women. With the second prompt, the majority of the nurses as well as the patients were represented by women of different ages and various skin colors, but white persons were in the minority and only a few men were represented. The third prompt generated images of only white nurses and patients of different ages; five of the nurses were male, but all of the patients were women. The fourth prompt generated more variation in age and gender among both the nurses and the patients, but only two of the images depicted a nurse who may have been of non-white ethnicity.

In conclusion, AI models are often trained on datasets that reflect existing societal biases. Norm criticism can help identify whether generated images reinforce traditional stereotypes, such as depicting nurses primarily as women or portraying patients in ways that align with racial or gendered stereotypes.

Place, publisher, year, edition, pages
IATED Digital Library, 2024. Vol. 1, p. 10233-10237
Keywords [en]
Technology, Education, AI, Equality
National Category
Nursing
Research subject
NURSING AND PUBLIC HEALTH SCIENCE, Nursing science
Identifiers
URN: urn:nbn:se:hv:diva-22887
DOI: 10.21125/iceri.2024.2603
ISBN: 978-84-09-63010-3 (print)
OAI: oai:DiVA.org:hv-22887
DiVA, id: diva2:1927199
Conference
17th annual International Conference of Education, Research and Innovation, Seville, Spain. 11-13 November, 2024
Available from: 2025-01-14 Created: 2025-01-14 Last updated: 2025-09-30

Open Access in DiVA

No full text in DiVA

Authority records

Master Östlund, Christian; Arveklev Höglund, Susanna
