Representation of Demographics in Otolaryngology by Artificial Intelligence Text-to-Image Platforms

IF 1.6 · CAS Tier 4 (Medicine) · JCR Q2 (Otorhinolaryngology)
Ariana L. Shaari, Anthony M. Saad, Aman M. Patel, Andrey Filimonov
Laryngoscope Investigative Otolaryngology, 10(3). Published 2025-05-19. DOI: 10.1002/lio2.70152 (https://onlinelibrary.wiley.com/doi/10.1002/lio2.70152)
Cited by: 0

Abstract

Objective

Artificial intelligence (AI) text-to-image generators have a propensity to reflect stereotypes. This study investigates the perceived race and gender of AI-generated portraits of otolaryngologists, evaluating their accuracy against workforce demographics and whether they amplify existing social biases.

Methods

Three text-to-image platforms (DALL-E3, Runway, Midjourney) were prompted to generate portrait photos of otolaryngologists across 29 categories, including personality traits, fellowship, and academic rank, yielding 580 portraits per platform. Two reviewers characterized the gender and race of the resulting 1740 portraits. Statistical analysis compared the demographics of the AI outputs to existing workforce demographic data.
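The prompt-construction scheme implied by these numbers can be sketched as follows. This is a minimal illustration, not the study's actual protocol: the category names below are hypothetical placeholders (the abstract does not enumerate the 29 categories), and the 20-images-per-category figure is inferred from 580 / 29 = 20.

```python
# Hedged sketch of the prompt set implied by the Methods:
# 29 categories x 20 images per category = 580 portraits per platform.

IMAGES_PER_CATEGORY = 20  # inferred from 580 / 29; not stated in the abstract

def build_prompts(categories, n_per_category):
    """Return one portrait-photo prompt per requested image."""
    return [
        f"A portrait photo of an otolaryngologist described as: {cat}"
        for cat in categories
        for _ in range(n_per_category)
    ]

# Placeholder names standing in for the study's 29 categories
# (e.g. personality traits, fellowships, academic ranks).
full_categories = [f"category_{i}" for i in range(29)]
prompts = build_prompts(full_categories, IMAGES_PER_CATEGORY)
print(len(prompts))  # 580 prompts per platform
```

Each prompt list would then be submitted to one platform, and the returned images pooled for reviewer labeling.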

Results

Of the 1740 AI-generated portraits, 88% were labeled White, 4% Black, 6% Asian, and 2% Indeterminate/Other race; 88% were labeled male and 12% female. Across academic ranks, White representation was 97% (department chairs), 90% (program directors), 93% (professors), and 78% (residents). Male representation was 90% (department chairs), 75% (program directors), 100% (professors), and 87% (residents). Runway produced more images of male (89% vs. 88% vs. 85%, p = 0.043) and White (92% vs. 88% vs. 80%, p < 0.001) otolaryngologists than DALL-E3 and Midjourney, respectively.
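The abstract does not name the statistical test behind these p-values; a Pearson chi-square comparison of label counts across the three platforms is one plausible reconstruction. In the sketch below, the per-platform gender counts are back-calculated from the rounded percentages of 580 images each, so they are approximations for illustration, not the study's raw data.

```python
def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# [male, female] counts per platform, approximated from the reported
# percentages (89%, 88%, 85% male) of 580 images each.
gender_counts = [
    [516, 64],  # Runway
    [510, 70],  # DALL-E3
    [493, 87],  # Midjourney
]
stat = chi2_stat(gender_counts)  # compared against chi-square with df = 2
```

A uniform table yields a statistic of zero; larger values indicate greater divergence between platforms. A library routine such as scipy.stats.chi2_contingency would give both the statistic and the p-value directly.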

Conclusion

Text-to-image platforms demonstrated racial and gender biases, with notable differences from actual workforce demographics. These platforms often underrepresented women and racial minority groups while overrepresenting White men. These disparities underscore the need for awareness of bias in AI, especially as these tools become more integrated into patient-facing platforms. Left unchecked, these biases risk marginalizing minority populations and reinforcing societal stereotypes.

Level of Evidence

4.
