When Do Humans Heed AI Agents' Advice? When Should They?

IF 2.9 | CAS Tier 3 (Psychology) | JCR Q1 (Behavioral Sciences)
Human Factors | Pub Date: 2024-07-01 | Epub Date: 2023-08-08 | DOI: 10.1177/00187208231190459
Richard E Dunning, Baruch Fischhoff, Alex L Davis
{"title":"人类何时听从人工智能代理的建议?何时应该听从?","authors":"Richard E Dunning, Baruch Fischhoff, Alex L Davis","doi":"10.1177/00187208231190459","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>We manipulate the presence, skill, and display of artificial intelligence (AI) recommendations in a strategy game to measure their effect on users' performance.</p><p><strong>Background: </strong>Many applications of AI require humans and AI agents to make decisions collaboratively. Success depends on how appropriately humans rely on the AI agent. We demonstrate an evaluation method for a platform that uses neural network agents of varying skill levels for the simple strategic game of Connect Four.</p><p><strong>Methods: </strong>We report results from a 2 × 3 between-subjects factorial experiment that varies the format of AI recommendations (categorical or probabilistic) and the AI agent's amount of training (low, medium, or high). On each round of 10 games, participants proposed a move, saw the AI agent's recommendations, and then moved.</p><p><strong>Results: </strong>Participants' performance improved with a highly skilled agent, but quickly plateaued, as they relied uncritically on the agent. Participants relied too little on lower skilled agents. The display format had no effect on users' skill or choices.</p><p><strong>Conclusions: </strong>The value of these AI agents depended on their skill level and users' ability to extract lessons from their advice.</p><p><strong>Application: </strong>Organizations employing AI decision support systems must consider behavioral aspects of the human-agent team. We demonstrate an approach to evaluating competing designs and assessing their performance.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11089830/pdf/","citationCount":"0","resultStr":"{\"title\":\"When Do Humans Heed AI Agents' Advice? When Should They?\",\"authors\":\"Richard E Dunning, Baruch Fischhoff, Alex L Davis\",\"doi\":\"10.1177/00187208231190459\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>We manipulate the presence, skill, and display of artificial intelligence (AI) recommendations in a strategy game to measure their effect on users' performance.</p><p><strong>Background: </strong>Many applications of AI require humans and AI agents to make decisions collaboratively. Success depends on how appropriately humans rely on the AI agent. We demonstrate an evaluation method for a platform that uses neural network agents of varying skill levels for the simple strategic game of Connect Four.</p><p><strong>Methods: </strong>We report results from a 2 × 3 between-subjects factorial experiment that varies the format of AI recommendations (categorical or probabilistic) and the AI agent's amount of training (low, medium, or high). On each round of 10 games, participants proposed a move, saw the AI agent's recommendations, and then moved.</p><p><strong>Results: </strong>Participants' performance improved with a highly skilled agent, but quickly plateaued, as they relied uncritically on the agent. Participants relied too little on lower skilled agents. 
The display format had no effect on users' skill or choices.</p><p><strong>Conclusions: </strong>The value of these AI agents depended on their skill level and users' ability to extract lessons from their advice.</p><p><strong>Application: </strong>Organizations employing AI decision support systems must consider behavioral aspects of the human-agent team. We demonstrate an approach to evaluating competing designs and assessing their performance.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11089830/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208231190459\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/8/8 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208231190459","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/8 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Objective: We manipulate the presence, skill, and display of artificial intelligence (AI) recommendations in a strategy game to measure their effect on users' performance.

Background: Many applications of AI require humans and AI agents to make decisions collaboratively. Success depends on how appropriately humans rely on the AI agent. We demonstrate an evaluation method for a platform that uses neural network agents of varying skill levels for the simple strategic game of Connect Four.
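To make the platform concrete, here is a minimal Python sketch of the kind of advice-giving agent interface the abstract implies. All names here are hypothetical, and the uniform scoring is a placeholder for the trained neural networks the paper describes:

    class Board:
        """Minimal Connect Four board stub; tracks only per-column fill counts."""
        ROWS, COLS = 6, 7

        def __init__(self):
            self.heights = [0] * self.COLS

        def column_open(self, col: int) -> bool:
            return self.heights[col] < self.ROWS


    class ConnectFourAdvisor:
        """Hypothetical advice-giving agent; `skill` stands in for the
        low/medium/high training levels the abstract varies."""

        def __init__(self, skill: str):
            self.skill = skill

        def recommend_probabilistic(self, board: Board) -> dict[int, float]:
            """Probability per legal column (the 'probabilistic' display format).
            A real agent would score moves with its network; this placeholder
            returns a uniform distribution over legal columns."""
            legal = [c for c in range(Board.COLS) if board.column_open(c)]
            return {c: 1.0 / len(legal) for c in legal}

        def recommend_categorical(self, board: Board) -> int:
            """Single recommended column (the 'categorical' display format)."""
            probs = self.recommend_probabilistic(board)
            return max(probs, key=probs.get)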

Methods: We report results from a 2 × 3 between-subjects factorial experiment that varies the format of AI recommendations (categorical or probabilistic) and the AI agent's amount of training (low, medium, or high). On each round of 10 games, participants proposed a move, saw the AI agent's recommendations, and then moved.
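A minimal sketch of how the six between-subjects conditions could be crossed and assigned. The factor labels come from the abstract; the round-robin assignment is an illustrative assumption, not the authors' procedure:

    import itertools

    # The two manipulated factors, as described in the abstract.
    DISPLAY_FORMATS = ["categorical", "probabilistic"]
    TRAINING_LEVELS = ["low", "medium", "high"]

    # The full 2 x 3 crossing yields six between-subjects conditions.
    CONDITIONS = list(itertools.product(DISPLAY_FORMATS, TRAINING_LEVELS))

    def assign_condition(participant_id: int) -> tuple[str, str]:
        """Assign one participant to a single condition (between-subjects).
        Round-robin assignment is assumed for illustration; the abstract
        does not specify the randomization scheme."""
        return CONDITIONS[participant_id % len(CONDITIONS)]

    for pid in range(6):
        fmt, skill = assign_condition(pid)
        print(f"participant {pid}: format={fmt}, agent training={skill}")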

Results: Participants' performance improved with a highly skilled agent, but quickly plateaued, as they relied uncritically on the agent. Participants relied too little on lower-skilled agents. The display format had no effect on users' skill or choices.
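One simple way to quantify the reliance these results describe is an agreement rate between participants' final moves and the agent's advice. The sketch below is an illustrative assumption; the paper may use a different measure:

    def agreement_rate(final_moves: list[int], advised_moves: list[int]) -> float:
        """Fraction of rounds on which the final move matched the advice.
        Near 1.0 with a weak agent suggests over-reliance; well below the
        agent's accuracy with a strong agent suggests under-reliance."""
        matches = sum(f == a for f, a in zip(final_moves, advised_moves))
        return matches / len(final_moves)

    # Example: a participant who follows advice on 8 of 10 rounds.
    print(agreement_rate([3, 3, 4, 2, 5, 1, 3, 0, 6, 2],
                         [3, 3, 4, 2, 5, 1, 3, 0, 4, 5]))  # -> 0.8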

Conclusions: The value of these AI agents depended on their skill level and users' ability to extract lessons from their advice.

Application: Organizations employing AI decision support systems must consider behavioral aspects of the human-agent team. We demonstrate an approach to evaluating competing designs and assessing their performance.

Source journal
Human Factors (Management Science - Behavioral Sciences)
CiteScore: 10.60
Self-citation rate: 6.10%
Annual articles: 99
Review time: 6-12 weeks
About the journal: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.