{"title":"Gender bias in text-to-image generative artificial intelligence depiction of Australian paramedics and first responders.","authors":"Geoffrey Currie, Johnathan Hewis, Phillip Ebbs","doi":"10.1016/j.auec.2024.11.003","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>In Australia, almost 50 % of paramedics are female yet they remain under-represented in stereotypical depictions of the profession. The potentially transformative value of generative artificial intelligence (AI) may be limited by stereotypical errors, misrepresentations and bias. Increasing use of text-to-image generative AI, like DALL-E 3, could reinforce gender and ethnicity biases and, therefore, is important to objectively evaluate.</p><p><strong>Method: </strong>In March 2024, DALL-E 3 was utilised via GPT-4 to generate a series of individual and group images of Australian paramedics, ambulance officers, police officers and firefighters. In total, 82 images were produced including 60 individual-character images, and 22 multiple-character group images. All 326 depicted characters were independently analysed by three reviewers for apparent gender, age, skin tone and ethnicity.</p><p><strong>Results: </strong>Among first responders, 90.8 % (N = 296) were depicted as male, 90.5 % (N = 295) as Caucasian, 95.7 % (N = 312) as a light skin tone, and 94.8 % (N = 309) as under 55 years of age. For paramedics and police the gender distribution was a statistically significant variation from that of actual Australian workforce data (all p < 0.001). Among the images of individual paramedics and ambulance officers (N = 32), DALL-E 3 depicted 100 % as male, 100 % as Caucasian and 100 % with light skin tone.</p><p><strong>Conclusion: </strong>Gender and ethnicity bias is a significant limitation for text-to-image generative AI using DALL-E 3 among Australian first responders. Generated images have a disproportionately high misrepresentation of males, Caucasians and light skin tones that are not representative of the diversity of paramedics in Australia today.</p>","PeriodicalId":55979,"journal":{"name":"Australasian Emergency Care","volume":" ","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Australasian Emergency Care","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.auec.2024.11.003","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EMERGENCY MEDICINE","Score":null,"Total":0}
Abstract
Introduction: In Australia, almost 50 % of paramedics are female, yet they remain under-represented in stereotypical depictions of the profession. The potentially transformative value of generative artificial intelligence (AI) may be limited by stereotypical errors, misrepresentations and bias. Increasing use of text-to-image generative AI, such as DALL-E 3, could reinforce gender and ethnicity biases, and it is therefore important to evaluate these tools objectively.
Method: In March 2024, DALL-E 3 was utilised via GPT-4 to generate a series of individual and group images of Australian paramedics, ambulance officers, police officers and firefighters. In total, 82 images were produced, comprising 60 individual-character images and 22 multiple-character group images. All 326 depicted characters were independently analysed by three reviewers for apparent gender, age, skin tone and ethnicity.
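The abstract does not describe a programmatic pipeline (the images were generated through the GPT-4/ChatGPT interface), but a minimal sketch of how comparable prompts could be issued to DALL-E 3 through the OpenAI Python SDK is shown below; the prompt wording, model settings and output handling are assumptions for illustration only.

```python
# Minimal sketch: requesting first-responder images from DALL-E 3 via the
# OpenAI Python SDK (openai >= 1.0). The study itself used the GPT-4/ChatGPT
# interface, so the prompts and parameters here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLES = ["paramedic", "ambulance officer", "police officer", "firefighter"]

def generate_image(role: str) -> str:
    """Request a single DALL-E 3 image of an Australian first responder."""
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"A photorealistic image of an Australian {role} at work",
        n=1,                 # DALL-E 3 returns one image per request
        size="1024x1024",
    )
    return result.data[0].url  # URL of the generated image

if __name__ == "__main__":
    for role in ROLES:
        print(role, generate_image(role))
```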
Results: Among first responders, 90.8 % (N = 296) were depicted as male, 90.5 % (N = 295) as Caucasian, 95.7 % (N = 312) with a light skin tone, and 94.8 % (N = 309) as under 55 years of age. For paramedics and police, the depicted gender distribution differed significantly from actual Australian workforce data (all p < 0.001). Among the images of individual paramedics and ambulance officers (N = 32), DALL-E 3 depicted 100 % as male, 100 % as Caucasian and 100 % with a light skin tone.
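The abstract does not state which statistical test was applied. As an illustration only, the sketch below runs a chi-square goodness-of-fit test using figures taken from the abstract (32 individual paramedic/ambulance images, all depicted as male, against an approximately 50 % female workforce); the test choice is an assumption and may not match the authors' analysis.

```python
# Illustrative only: chi-square goodness-of-fit comparing depicted gender counts
# against the ~50 % female Australian paramedic workforce cited in the abstract.
# The authors' actual statistical method and underlying data may differ.
from scipy.stats import chisquare

observed = [32, 0]    # depicted male, female (individual paramedic/ambulance images)
expected = [16, 16]   # expected counts under a ~50/50 workforce split

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2e}")
# p falls well below 0.001, consistent with the significance level reported above
```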
Conclusion: Gender and ethnicity bias is a significant limitation of DALL-E 3 text-to-image generation when depicting Australian first responders. The generated images disproportionately misrepresent first responders as male, Caucasian and light-skinned, and are not representative of the diversity of paramedics in Australia today.
Journal introduction:
Australasian Emergency Care is an international peer-reviewed journal dedicated to supporting emergency nurses, physicians, paramedics and other professionals in advancing the science and practice of emergency care, wherever it is delivered. As the official journal of the College of Emergency Nursing Australasia (CENA), Australasian Emergency Care is a conduit for clinical, applied, and theoretical research and knowledge that advances the science and practice of emergency care in original, innovative and challenging ways. The journal serves as a leading voice for the emergency care community, reflecting its inter-professional diversity, and the importance of collaboration and shared decision-making to achieve quality patient outcomes. It is strongly focussed on advancing the patient experience and quality of care across the emergency care continuum, spanning the pre-hospital, hospital and post-hospital settings within Australasia and beyond.