What’s It Like to Trust an LLM: The Devolution of Trust Psychology?

IF 1.9 · Region 4 (Engineering) · Q3 ENGINEERING, ELECTRICAL & ELECTRONIC
Simon T. Powers; Neil Urquhart; Chloe M. Barnes; Theodor Cimpeanu; Anikó Ekárt; The Anh Han; Jeremy Pitt; Michael Guckert
DOI: 10.1109/MTS.2025.3583233
Journal: IEEE Technology and Society Magazine, vol. 44, no. 3, pp. 30–37
Publication date: 2025-09-12
Full text: https://ieeexplore.ieee.org/document/11163569/
Citations: 0

Abstract

What’s It Like to Trust an LLM: The Devolution of Trust Psychology?
The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raise profound questions about the societal consequences that this might have on both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion about the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development using the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase the awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.
Source journal
IEEE Technology and Society Magazine (Engineering: Electrical & Electronic)
CiteScore: 3.00
Self-citation rate: 13.60%
Articles per year: 72
Review time: >12 weeks
Journal description: IEEE Technology and Society Magazine invites feature articles (refereed), special articles, and commentaries on topics within the scope of the IEEE Society on Social Implications of Technology, in the broad areas of social implications of electrotechnology, history of electrotechnology, and engineering ethics.