Adverse reactions to the use of large language models in social interactions.

IF 2.2 Q2 MULTIDISCIPLINARY SCIENCES
PNAS nexus Pub Date: 2025-04-07 eCollection Date: 2025-04-01 DOI: 10.1093/pnasnexus/pgaf112
Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher
{"title":"在社会交往中使用大型语言模型的不良反应。","authors":"Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher","doi":"10.1093/pnasnexus/pgaf112","DOIUrl":null,"url":null,"abstract":"<p><p>Large language models (LLMs) are poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make many human decisions, there is limited understanding of how individuals will respond to its use in social interactions. In particular, it remains unclear how individuals interact with LLMs when the interaction has consequences for other people. Here, we report the results of a large-scale, preregistered online experiment ( <math><mi>n</mi> <mo>=</mo> <mspace></mspace> <mn>3</mn> <mo>,</mo> <mn>552</mn></math> ) showing that human players' fairness, trust, trustworthiness, cooperation, and coordination in economic two-player games decrease when the decision of the interaction partner is taken over by ChatGPT. On the contrary, we observe no adverse reactions when individuals are uncertain whether they are interacting with a human or a LLM. At the same time, participants often delegate decisions to the LLM, especially when the model's involvement is not disclosed, and individuals have difficulty distinguishing between decisions made by humans and those made by AI.</p>","PeriodicalId":74468,"journal":{"name":"PNAS nexus","volume":"4 4","pages":"pgaf112"},"PeriodicalIF":2.2000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11997303/pdf/","citationCount":"0","resultStr":"{\"title\":\"Adverse reactions to the use of large language models in social interactions.\",\"authors\":\"Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher\",\"doi\":\"10.1093/pnasnexus/pgaf112\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Large language models (LLMs) are poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make many human decisions, there is limited understanding of how individuals will respond to its use in social interactions. In particular, it remains unclear how individuals interact with LLMs when the interaction has consequences for other people. Here, we report the results of a large-scale, preregistered online experiment ( <math><mi>n</mi> <mo>=</mo> <mspace></mspace> <mn>3</mn> <mo>,</mo> <mn>552</mn></math> ) showing that human players' fairness, trust, trustworthiness, cooperation, and coordination in economic two-player games decrease when the decision of the interaction partner is taken over by ChatGPT. On the contrary, we observe no adverse reactions when individuals are uncertain whether they are interacting with a human or a LLM. 
At the same time, participants often delegate decisions to the LLM, especially when the model's involvement is not disclosed, and individuals have difficulty distinguishing between decisions made by humans and those made by AI.</p>\",\"PeriodicalId\":74468,\"journal\":{\"name\":\"PNAS nexus\",\"volume\":\"4 4\",\"pages\":\"pgaf112\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11997303/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PNAS nexus\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/pnasnexus/pgaf112\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/4/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PNAS nexus","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/pnasnexus/pgaf112","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/4/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Large language models (LLMs) are poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make many human decisions, there is limited understanding of how individuals will respond to its use in social interactions. In particular, it remains unclear how individuals interact with LLMs when the interaction has consequences for other people. Here, we report the results of a large-scale, preregistered online experiment (n = 3,552) showing that human players' fairness, trust, trustworthiness, cooperation, and coordination in economic two-player games decrease when the decision of the interaction partner is taken over by ChatGPT. In contrast, we observe no adverse reactions when individuals are uncertain whether they are interacting with a human or an LLM. At the same time, participants often delegate decisions to the LLM, especially when the model's involvement is not disclosed, and individuals have difficulty distinguishing between decisions made by humans and those made by AI.
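The abstract refers to standard two-player economic games of the kind used to measure trust and trustworthiness. As a minimal illustration of such an interaction, the Python sketch below computes payoffs in a simple trust game; the endowment, multiplier, and return rule are illustrative assumptions, not the parameterization used in the study.

def trust_game(sent, return_share, endowment=10.0, multiplier=3.0):
    """Payoffs (investor, trustee) for one round of a trust game.

    The investor sends part of the endowment; the amount is multiplied
    on the way to the trustee, who returns a share of the multiplied sum.
    All parameter values here are illustrative, not taken from the paper.
    """
    assert 0.0 <= sent <= endowment and 0.0 <= return_share <= 1.0
    multiplied = sent * multiplier
    returned = multiplied * return_share
    return endowment - sent + returned, multiplied - returned

# Investor sends half the endowment; trustee returns half of the tripled amount.
print(trust_game(5.0, 0.5))  # -> (12.5, 7.5)

In the experiment described above, the trustee's return decision would be the step taken over by ChatGPT in the relevant treatment.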
