{"title":"Stereotypes in artificial intelligence-generated content: Impact on content choice.","authors":"Fei Gao, Lan Xia, Wenting Zhong","doi":"10.1037/xap0000548","DOIUrl":null,"url":null,"abstract":"<p><p>Generative artificial intelligence is reshaping content creation, shifting from human-generated content to artificial intelligence (AI)-generated content from which we choose. A growing concern is the propagation of stereotypes in AI-generated content. Through a preregistered large-scale field study in 2024, tasking ChatGPT, Midjourney, and Canva with generating 1,110 images for multiple scenarios, we find that AI systematically replicates and potentially amplifies sex and racial stereotypes by generating a significantly larger proportion of stereotypical content in a choice set. Five preregistered experiments in 2024 and 2025 (<i>N</i> = 2,994, U.S. adults) further demonstrate that this surplus of stereotypical content increases the likelihood of people choosing it, driven by both its availability and existing stereotypes in people's minds. When AI offers a larger proportion of content aligned with existing stereotypes, it makes such choices more fluent. Conversely, reducing the availability of AI-generated stereotypical content in choice sets decreases individuals' stereotypical beliefs and choices. We further find that increasing awareness of stereotypes in AI-generated content does not prompt self-correction when people are exposed to stereotypes perceived relatively harmless (e.g., women-nurse). Instead, it increases the likelihood of choosing stereotypical content. However, people self-correct when exposed to AI-generated stereotypes perceived as harmful (e.g., Black people-criminal). (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48003,"journal":{"name":"Journal of Experimental Psychology-Applied","volume":" ","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Psychology-Applied","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/xap0000548","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
Generative artificial intelligence is reshaping content creation, shifting from human-generated content to artificial intelligence (AI)-generated content from which we choose. A growing concern is the propagation of stereotypes in AI-generated content. Through a preregistered large-scale field study in 2024, tasking ChatGPT, Midjourney, and Canva with generating 1,110 images for multiple scenarios, we find that AI systematically replicates and potentially amplifies sex and racial stereotypes by generating a significantly larger proportion of stereotypical content in a choice set. Five preregistered experiments in 2024 and 2025 (N = 2,994, U.S. adults) further demonstrate that this surplus of stereotypical content increases the likelihood of people choosing it, driven by both its availability and existing stereotypes in people's minds. When AI offers a larger proportion of content aligned with existing stereotypes, it makes such choices more fluent. Conversely, reducing the availability of AI-generated stereotypical content in choice sets decreases individuals' stereotypical beliefs and choices. We further find that increasing awareness of stereotypes in AI-generated content does not prompt self-correction when people are exposed to stereotypes perceived as relatively harmless (e.g., women-nurse). Instead, it increases the likelihood of choosing stereotypical content. However, people self-correct when exposed to AI-generated stereotypes perceived as harmful (e.g., Black people-criminal). (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Journal description:
The mission of the Journal of Experimental Psychology: Applied® is to publish original empirical investigations in experimental psychology that bridge practically oriented problems and psychological theory. The journal also publishes research aimed at developing and testing models of cognitive processing or behavior in applied situations, including laboratory and field settings. Occasionally, review articles are considered for publication if they contribute significantly to important topics within applied experimental psychology. Areas of interest include applications of perception, attention, memory, decision making, reasoning, information processing, problem solving, learning, and skill acquisition.