What's It Like to Trust an LLM: The Devolution of Trust Psychology?
Simon T. Powers; Neil Urquhart; Chloe M. Barnes; Theodor Cimpeanu; Anikó Ekárt; The Anh Han; Jeremy Pitt; Michael Guckert
IEEE Technology and Society Magazine, vol. 44, no. 3, pp. 30–37
DOI: 10.1109/MTS.2025.3583233
Published: 2025-09-12
Citations: 0
Abstract
The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raise profound questions about the societal consequences this might have at both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion about the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development using the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.
Journal Introduction
IEEE Technology and Society Magazine invites feature articles (refereed), special articles, and commentaries on topics within the scope of the IEEE Society on Social Implications of Technology, in the broad areas of social implications of electrotechnology, history of electrotechnology, and engineering ethics.