Stereotypes in artificial intelligence-generated content: Impact on content choice.

Impact factor: 2.1 · CAS Tier 3 (Psychology) · JCR Q2 (Psychology, Applied)
Fei Gao, Lan Xia, Wenting Zhong
Journal: Journal of Experimental Psychology: Applied
DOI: 10.1037/xap0000548
Publication date: 2025-09-29
Publication type: Journal Article
Citations: 0

Abstract

Generative artificial intelligence is reshaping content creation, shifting from human-generated content to artificial intelligence (AI)-generated content from which we choose. A growing concern is the propagation of stereotypes in AI-generated content. Through a preregistered large-scale field study in 2024, tasking ChatGPT, Midjourney, and Canva with generating 1,110 images for multiple scenarios, we find that AI systematically replicates and potentially amplifies sex and racial stereotypes by generating a significantly larger proportion of stereotypical content in a choice set. Five preregistered experiments in 2024 and 2025 (N = 2,994, U.S. adults) further demonstrate that this surplus of stereotypical content increases the likelihood of people choosing it, driven by both its availability and existing stereotypes in people's minds. When AI offers a larger proportion of content aligned with existing stereotypes, it makes such choices more fluent. Conversely, reducing the availability of AI-generated stereotypical content in choice sets decreases individuals' stereotypical beliefs and choices. We further find that increasing awareness of stereotypes in AI-generated content does not prompt self-correction when people are exposed to stereotypes perceived as relatively harmless (e.g., women-nurse). Instead, it increases the likelihood of choosing stereotypical content. However, people self-correct when exposed to AI-generated stereotypes perceived as harmful (e.g., Black people-criminal). (PsycInfo Database Record (c) 2025 APA, all rights reserved).

Source journal metrics: CiteScore 4.90 · Self-citation rate 3.80% · Articles published per year: 110
About the journal: The mission of the Journal of Experimental Psychology: Applied® is to publish original empirical investigations in experimental psychology that bridge practically oriented problems and psychological theory. The journal also publishes research aimed at developing and testing models of cognitive processing or behavior in applied situations, including laboratory and field settings. Occasionally, review articles are considered for publication if they contribute significantly to important topics within applied experimental psychology. Areas of interest include applications of perception, attention, memory, decision making, reasoning, information processing, problem solving, learning, and skill acquisition.