Journal: Computers in Human Behavior (Q1, Psychology, Experimental; Impact Factor 9.0; Region category: Psychology, Rank 1)
DOI: 10.1016/j.chb.2024.108352
Publication date: 2024-06-25 (Journal Article)
Article URL: https://www.sciencedirect.com/science/article/pii/S0747563224002206
PDF: https://www.sciencedirect.com/science/article/pii/S0747563224002206/pdfft?md5=299a70f444225bc45709fbd6ca8a93f1&pid=1-s2.0-S0747563224002206-main.pdf
Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI
Decision-making is undergoing rapid changes due to the introduction of artificial intelligence (AI), as AI recommender systems can help mitigate human flaws and increase decision accuracy and efficiency. However, AI can also commit errors or suffer from algorithmic bias. Hence, blind trust in technologies carries risks, as users may follow detrimental advice resulting in undesired consequences. Building upon research on algorithm appreciation and trust in AI, the current study investigates whether users who receive AI advice in an uncertain situation overrely on this advice — to their own detriment and that of other parties. In a domain-independent, incentivized, and interactive behavioral experiment, we find that the mere knowledge of advice being generated by an AI causes people to overrely on it, that is, to follow AI advice even when it contradicts available contextual information as well as their own assessment. Frequently, this overreliance leads not only to inefficient outcomes for the advisee, but also to undesired effects regarding third parties. The results call into question how AI is being used in assisted decision making, emphasizing the importance of AI literacy and effective trust calibration for productive deployment of such systems.
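One way to make the "overreliance" construct concrete is as the share of conflict trials — trials in which the AI's advice contradicts both the available contextual information and the participant's own initial assessment — where the advice is nevertheless followed. The sketch below is purely illustrative: the trial schema, field names, and metric are assumptions for exposition, not the study's actual data format or operationalization.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One advice-taking trial (hypothetical schema, not the study's data)."""
    ai_advice: str       # option recommended by the AI
    context_best: str    # option favored by the available contextual information
    initial_choice: str  # participant's own assessment before seeing the advice
    final_choice: str    # participant's decision after seeing the advice

def overreliance_rate(trials):
    """Share of conflict trials (advice contradicts both context and the
    participant's initial assessment) in which the advice was still followed."""
    conflicts = [t for t in trials
                 if t.ai_advice != t.context_best
                 and t.ai_advice != t.initial_choice]
    if not conflicts:
        return 0.0
    followed = sum(1 for t in conflicts if t.final_choice == t.ai_advice)
    return followed / len(conflicts)

trials = [
    Trial("A", "B", "B", "A"),  # conflict trial, advice followed -> overreliance
    Trial("A", "B", "B", "B"),  # conflict trial, advice resisted
    Trial("C", "C", "C", "C"),  # no conflict, excluded from the metric
]
print(overreliance_rate(trials))  # 0.5
```

A metric of this shape only counts trials where following the advice requires overriding both external evidence and one's own judgment, which is what distinguishes overreliance from ordinary, well-calibrated advice taking.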
Journal introduction:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.