Stereotypical bias amplification and reversal in an experimental model of human interaction with generative artificial intelligence.

IF 2.9 · CAS Tier 3 · Multidisciplinary journal · Q1 MULTIDISCIPLINARY SCIENCES
Royal Society Open Science · Pub Date: 2025-04-09 · eCollection Date: 2025-04-01 · DOI: 10.1098/rsos.241472
Kevin Allan, Jacobo Azcona, Somayajulu Sripada, Georgios Leontidis, Clare A M Sutherland, Louise H Phillips, Douglas Martin
Citations: 0

Abstract

Stereotypical biases are readily acquired and expressed by generative artificial intelligence (AI), causing growing societal concern about these systems amplifying existing human bias. This concern rests on reasonable psychological assumptions, but stereotypical bias amplification during human-AI interaction relative to pre-existing baseline levels has not been demonstrated. Here, we use previous psychological work on gendered character traits to capture and control gender stereotypes expressed in character descriptions generated by OpenAI's GPT-3.5. In four experiments (N = 782) with a first impressions task, we find that unexplained ('black-box') character recommendations using stereotypical traits already convey a potent persuasive influence, significantly amplifying baseline stereotyping within first impressions. Recommendations that are counter-stereotypical eliminate and effectively reverse human baseline bias, but these stereotype-challenging influences propagate less well than reinforcing influences from stereotypical recommendations. Critically, the bias amplification and reversal phenomena occur when GPT-3.5 elaborates on the core stereotypical content, although GPT-3.5's explanations propagate counter-stereotypical influence more effectively and persuasively than black-box recommendations. Our findings strongly imply that without robust safeguards, generative AI will amplify existing bias. But with safeguards, existing bias can be eliminated and even reversed. Our novel approach safely allows such effects to be studied in various contexts where gender and other bias-inducing social stereotypes operate.

Source Journal
Royal Society Open Science
CiteScore: 6.00
Self-citation rate: 0.00%
Articles published per year: 508
Review time: 14 weeks
Journal description: Royal Society Open Science is an open-access journal publishing high-quality original research across the entire range of science and mathematics on the basis of objective peer review. It allows the Society to publish all the high-quality work it receives without the usual restrictions on scope, length or impact.