Prejudiced interactions with large language models (LLMs) reduce trustworthiness and behavioral intentions among members of stigmatized groups

IF 9.0 · CAS Region 1 (Psychology) · JCR Q1 (Psychology, Experimental)
Zachary W. Petzel, Leanne Sowerby
{"title":"与大型语言模型(llm)的偏见互动降低了受歧视群体成员之间的可信度和行为意图","authors":"Zachary W. Petzel,&nbsp;Leanne Sowerby","doi":"10.1016/j.chb.2025.108563","DOIUrl":null,"url":null,"abstract":"<div><div>Users report prejudiced responses generated by large language models (LLMs) like ChatGPT. Across 3 preregistered experiments, members of stigmatized social groups (Black Americans, women) reported higher trustworthiness of LLMs after viewing unbiased interactions with ChatGPT compared to when viewing AI-generated prejudice (i.e., racial or gender disparities in salary). Notably, higher trustworthiness accounted for increased behavioral intentions to use LLMs, but only among stigmatized social groups. Conversely, White Americans were more likely to use LLMs when AI-generated prejudice confirmed implicit racial biases, while men intended to use LLMs when responses matched implicit gender biases. Results suggest reducing AI-generated prejudice may promote trustworthiness of LLMs among members of stigmatized social groups, increasing their intentions to use AI tools. Importantly, addressing AI-generated prejudice could minimize social disparities in adoption of LLMs which might further exacerbate professional and educational disparities. Given expected integration of AI in professional and educational settings, these findings may guide equitable implementation strategies among employees and students, in addition to extending theoretical models of technology acceptance by suggesting additional mechanisms of behavioral intentions to use emerging technologies (e.g., trustworthiness).</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"165 ","pages":"Article 108563"},"PeriodicalIF":9.0000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prejudiced interactions with large language models (LLMs) reduce trustworthiness and behavioral intentions among members of stigmatized groups\",\"authors\":\"Zachary W. Petzel,&nbsp;Leanne Sowerby\",\"doi\":\"10.1016/j.chb.2025.108563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Users report prejudiced responses generated by large language models (LLMs) like ChatGPT. Across 3 preregistered experiments, members of stigmatized social groups (Black Americans, women) reported higher trustworthiness of LLMs after viewing unbiased interactions with ChatGPT compared to when viewing AI-generated prejudice (i.e., racial or gender disparities in salary). Notably, higher trustworthiness accounted for increased behavioral intentions to use LLMs, but only among stigmatized social groups. Conversely, White Americans were more likely to use LLMs when AI-generated prejudice confirmed implicit racial biases, while men intended to use LLMs when responses matched implicit gender biases. Results suggest reducing AI-generated prejudice may promote trustworthiness of LLMs among members of stigmatized social groups, increasing their intentions to use AI tools. Importantly, addressing AI-generated prejudice could minimize social disparities in adoption of LLMs which might further exacerbate professional and educational disparities. 
Given expected integration of AI in professional and educational settings, these findings may guide equitable implementation strategies among employees and students, in addition to extending theoretical models of technology acceptance by suggesting additional mechanisms of behavioral intentions to use emerging technologies (e.g., trustworthiness).</div></div>\",\"PeriodicalId\":48471,\"journal\":{\"name\":\"Computers in Human Behavior\",\"volume\":\"165 \",\"pages\":\"Article 108563\"},\"PeriodicalIF\":9.0000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S074756322500010X\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322500010X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Users report prejudiced responses generated by large language models (LLMs) like ChatGPT. Across 3 preregistered experiments, members of stigmatized social groups (Black Americans, women) reported higher trustworthiness of LLMs after viewing unbiased interactions with ChatGPT compared to when viewing AI-generated prejudice (i.e., racial or gender disparities in salary). Notably, higher trustworthiness accounted for increased behavioral intentions to use LLMs, but only among stigmatized social groups. Conversely, White Americans were more likely to use LLMs when AI-generated prejudice confirmed implicit racial biases, while men intended to use LLMs when responses matched implicit gender biases. Results suggest reducing AI-generated prejudice may promote trustworthiness of LLMs among members of stigmatized social groups, increasing their intentions to use AI tools. Importantly, addressing AI-generated prejudice could minimize social disparities in adoption of LLMs which might further exacerbate professional and educational disparities. Given expected integration of AI in professional and educational settings, these findings may guide equitable implementation strategies among employees and students, in addition to extending theoretical models of technology acceptance by suggesting additional mechanisms of behavioral intentions to use emerging technologies (e.g., trustworthiness).
Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Articles published: 381
Review time: 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.