{"title":"Who is a scientist? Gender and racial biases in google vision AI","authors":"Ehsan Mohammadi, Yizhou Cai, Alamir Novin, Valerie Vera, Ehsan Soltanmohammadi","doi":"10.1007/s43681-025-00742-4","DOIUrl":null,"url":null,"abstract":"<div><p>With the prevalence of artificial intelligence (AI) in everyday life, there is a need to study the biases of AI. Specifically, understanding the biases of AI in computer vision is important due to visual content's role in creating classes and categories that can shape people’s perspectives. Without supervision, such classifications can lead to gradual and intangible negative impacts of AI discrimination in the real world. Demographics at the intersection of gender and racial biases may experience unforeseen multiplier effects due to how AI compounds big data without accounting for implicit biases. To quantitatively verify this multiplier effect of biases, this study first examines the gender and racial biases in Google Cloud Vision AI, a leading application with a high level of adoption and usage in different sectors worldwide. Statistical analysis of 1600 diverse images of scientists reveals that Google Cloud Vision AI has implicit gender and racial biases in identifying scientists in image processing. Particularly, the findings show that, in this sample, Black and Hispanic individuals were represented less compared to White and Asian individuals as scientists. Google Cloud Vision AI also significantly underrepresented women as scientists compared to men. Finally, the results indicate that biases at the <i>intersection</i> of race and gender are exponentially worse, with women of color being least represented in images of scientists by Google Vision. 
Given the ubiquity and impact of AI applications, addressing the complexity of social issues such as equitable integration and algorithmic fairness is essential to maintaining public trust in AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4993 - 5010"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00742-4.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00742-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0