Who is a scientist? Gender and racial biases in Google Vision AI

Ehsan Mohammadi, Yizhou Cai, Alamir Novin, Valerie Vera, Ehsan Soltanmohammadi
{"title":"谁是科学家?b谷歌视觉人工智能中的性别和种族偏见","authors":"Ehsan Mohammadi,&nbsp;Yizhou Cai,&nbsp;Alamir Novin,&nbsp;Valerie Vera,&nbsp;Ehsan Soltanmohammadi","doi":"10.1007/s43681-025-00742-4","DOIUrl":null,"url":null,"abstract":"<div><p>With the prevalence of artificial intelligence (AI) in everyday life, there is a need to study the biases of AI. Specifically, understanding the biases of AI in computer vision is important due to visual content's role in creating classes and categories that can shape people’s perspectives. Without supervision, such classifications can lead to gradual and intangible negative impacts of AI discrimination in the real world. Demographics at the intersection of gender and racial biases may experience unforeseen multiplier effects due to how AI compounds big data without accounting for implicit biases. To quantitatively verify this multiplier effect of biases, this study first examines the gender and racial biases in Google Cloud Vision AI, a leading application with a high level of adoption and usage in different sectors worldwide. Statistical analysis of 1600 diverse images of scientists reveals that Google Cloud Vision AI has implicit gender and racial biases in identifying scientists in image processing. Particularly, the findings show that, in this sample, Black and Hispanic individuals were represented less compared to White and Asian individuals as scientists. Google Cloud Vision AI also significantly underrepresented women as scientists compared to men. Finally, the results indicate that biases at the <i>intersection</i> of race and gender are exponentially worse, with women of color being least represented in images of scientists by Google Vision. Given the ubiquity and impact of AI applications, addressing the complexity of social issues such as equitable integration and algorithmic fairness is essential to maintaining public trust in AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4993 - 5010"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00742-4.pdf","citationCount":"0","resultStr":"{\"title\":\"Who is a scientist? Gender and racial biases in google vision AI\",\"authors\":\"Ehsan Mohammadi,&nbsp;Yizhou Cai,&nbsp;Alamir Novin,&nbsp;Valerie Vera,&nbsp;Ehsan Soltanmohammadi\",\"doi\":\"10.1007/s43681-025-00742-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>With the prevalence of artificial intelligence (AI) in everyday life, there is a need to study the biases of AI. Specifically, understanding the biases of AI in computer vision is important due to visual content's role in creating classes and categories that can shape people’s perspectives. Without supervision, such classifications can lead to gradual and intangible negative impacts of AI discrimination in the real world. Demographics at the intersection of gender and racial biases may experience unforeseen multiplier effects due to how AI compounds big data without accounting for implicit biases. To quantitatively verify this multiplier effect of biases, this study first examines the gender and racial biases in Google Cloud Vision AI, a leading application with a high level of adoption and usage in different sectors worldwide. 
Statistical analysis of 1600 diverse images of scientists reveals that Google Cloud Vision AI has implicit gender and racial biases in identifying scientists in image processing. Particularly, the findings show that, in this sample, Black and Hispanic individuals were represented less compared to White and Asian individuals as scientists. Google Cloud Vision AI also significantly underrepresented women as scientists compared to men. Finally, the results indicate that biases at the <i>intersection</i> of race and gender are exponentially worse, with women of color being least represented in images of scientists by Google Vision. Given the ubiquity and impact of AI applications, addressing the complexity of social issues such as equitable integration and algorithmic fairness is essential to maintaining public trust in AI.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 5\",\"pages\":\"4993 - 5010\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s43681-025-00742-4.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-025-00742-4\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00742-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


With the prevalence of artificial intelligence (AI) in everyday life, there is a need to study the biases of AI. Specifically, understanding the biases of AI in computer vision is important due to visual content's role in creating classes and categories that can shape people’s perspectives. Without supervision, such classifications can lead to gradual and intangible negative impacts of AI discrimination in the real world. Demographics at the intersection of gender and racial biases may experience unforeseen multiplier effects due to how AI compounds big data without accounting for implicit biases. To quantitatively verify this multiplier effect of biases, this study first examines the gender and racial biases in Google Cloud Vision AI, a leading application with a high level of adoption and usage in different sectors worldwide. Statistical analysis of 1600 diverse images of scientists reveals that Google Cloud Vision AI has implicit gender and racial biases in identifying scientists in image processing. Particularly, the findings show that, in this sample, Black and Hispanic individuals were represented less compared to White and Asian individuals as scientists. Google Cloud Vision AI also significantly underrepresented women as scientists compared to men. Finally, the results indicate that biases at the intersection of race and gender are exponentially worse, with women of color being least represented in images of scientists by Google Vision. Given the ubiquity and impact of AI applications, addressing the complexity of social issues such as equitable integration and algorithmic fairness is essential to maintaining public trust in AI.
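For readers who want to probe this kind of bias themselves, the following Python sketch shows an audit in the spirit of the study: it sends scientist images to Google Cloud Vision's label-detection endpoint and tallies, per demographic group, how often science-related labels come back. The SCIENCE_LABELS set, the confidence threshold, and the group-encoded filename convention are illustrative assumptions; the paper does not publish its exact pipeline here.

# Hypothetical audit sketch: query Google Cloud Vision label detection for a
# set of scientist images and count, per demographic group, how often
# science-related labels are returned. The SCIENCE_LABELS set, the score
# threshold, and the group-in-filename convention are illustrative
# assumptions, not the paper's documented protocol. Requires
# `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS.
from collections import defaultdict
from pathlib import Path

from google.cloud import vision

# Assumed set of labels treated as "recognized as a scientist"; the study's
# exact label taxonomy may differ.
SCIENCE_LABELS = {"scientist", "science", "researcher", "chemist", "laboratory"}
SCORE_THRESHOLD = 0.5  # illustrative confidence cutoff

client = vision.ImageAnnotatorClient()


def science_labels_for(path: Path) -> set[str]:
    """Return the science-related labels Vision assigns to one image."""
    image = vision.Image(content=path.read_bytes())
    response = client.label_detection(image=image)
    return {
        label.description.lower()
        for label in response.label_annotations
        if label.score >= SCORE_THRESHOLD
        and label.description.lower() in SCIENCE_LABELS
    }


def tally(image_dir: str) -> dict[str, tuple[int, int]]:
    """Count (recognized, total) per group, assuming files are named like
    'black_woman_012.jpg' so the group is the two-part filename prefix."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for path in Path(image_dir).glob("*.jpg"):
        group = "_".join(path.stem.split("_")[:2])  # e.g. "black_woman"
        recognized = bool(science_labels_for(path))
        counts[group][0] += int(recognized)
        counts[group][1] += 1
    return {g: (rec, tot) for g, (rec, tot) in counts.items()}


if __name__ == "__main__":
    for group, (recognized, total) in sorted(tally("images").items()):
        print(f"{group}: {recognized}/{total} labeled as science-related")

From the resulting per-group counts, a test such as scipy.stats.chi2_contingency could check whether recognition rates differ significantly across groups; that is one standard way to quantify the disparities the abstract reports, though the authors' exact statistical tests may differ.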
