Accounting for Human Engagement Behavior to Enhance AI-Assisted Decision Making

Ming Yin
Proceedings of the AAAI Symposium Series · Published 2024-05-20 · DOI: 10.1609/aaaiss.v3i1.31184 (https://doi.org/10.1609/aaaiss.v3i1.31184)

Abstract

Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the use of AI-based decision aids in human decision-making processes has given rise to a new paradigm of AI-assisted decision making: the AI-based decision aid provides a decision recommendation to the human decision makers, while the humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans engage with the AI-based decision aid in these processes, and how to promote the effectiveness of the human-AI team. In this talk, I'll discuss a few examples illustrating that when AI is used to assist humans in decision making, whether an individual decision maker or a group of decision makers, people's engagement with the AI assistance is largely driven by their heuristics and biases rather than by careful deliberation about the respective strengths and limitations of the AI and themselves. I'll then describe how to enhance AI-assisted decision making by accounting for human engagement behavior in the design of AI-based decision aids. For example, AI recommendations can be presented to decision makers in a way that promotes appropriate trust in and reliance on the AI, by leveraging or mitigating human biases and informed by an analysis of human competence in decision making. Alternatively, AI-assisted decision making can be improved by developing AI models that anticipate and adapt to the engagement behavior of human decision makers.
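The paradigm the abstract describes (the AI aid recommends, the human retains the final call, and reliance is mediated by trust rather than deliberation) can be sketched as a minimal simulation. The function names, the threshold "model," and the Bernoulli reliance rule below are illustrative assumptions for exposition, not the models discussed in the talk.

```python
import random

def ai_recommendation(case):
    """Hypothetical AI decision aid: returns a recommended label and a
    confidence score. A fixed threshold stands in for a trained classifier."""
    label = case["score"] > 0.5
    confidence = 0.8
    return label, confidence

def human_final_decision(own_judgment, ai_label, trust):
    """Simulate the human's final call. With probability `trust` the decision
    maker relies on the AI recommendation; otherwise they keep their own
    judgment. This heuristic, all-or-nothing reliance (rather than weighing
    the AI's and their own competence) is the engagement pattern the talk
    argues designers must account for."""
    if random.random() < trust:
        return ai_label
    return own_judgment

# One decision instance: the human initially disagrees with the AI.
case = {"score": 0.7}
label, conf = ai_recommendation(case)
final = human_final_decision(own_judgment=False, ai_label=label, trust=0.6)
```

Under this sketch, a design that calibrates `trust` toward the AI's actual accuracy (e.g., by showing confidence only when the AI is competent on the case at hand) would improve team performance without changing either agent's underlying ability.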