Aligning Judgment Using Task Context and Explanations to Improve Human-Recommender System Performance

Divya Srivastava, Karen M. Feigh
{"title":"利用任务背景和解释调整判断,提高人类推荐系统的性能","authors":"Divya Srivastava, Karen M. Feigh","doi":"arxiv-2409.10717","DOIUrl":null,"url":null,"abstract":"Recommender systems, while a powerful decision making tool, are often\noperationalized as black box models, such that their AI algorithms are not\naccessible or interpretable by human operators. This in turn can cause\nconfusion and frustration for the operator and result in unsatisfactory\noutcomes. While the field of explainable AI has made remarkable strides in\naddressing this challenge by focusing on interpreting and explaining the\nalgorithms to human operators, there are remaining gaps in the human's\nunderstanding of the recommender system. This paper investigates the relative\nimpact of using context, properties of the decision making task and\nenvironment, to align human and AI algorithm understanding of the state of the\nworld, i.e. judgment, to improve joint human-recommender performance as\ncompared to utilizing post-hoc algorithmic explanations. We conducted an\nempirical, between-subjects experiment in which participants were asked to work\nwith an automated recommender system to complete a decision making task. We\nmanipulated the method of transparency (shared contextual information to\nsupport shared judgment vs algorithmic explanations) and record the human's\nunderstanding of the task, the recommender system, and their overall\nperformance. We found that both techniques yielded equivalent agreement on\nfinal decisions. However, those who saw task context had less tendency to\nover-rely on the recommender system and were able to better pinpoint in what\nconditions the AI erred. Both methods improved participants' confidence in\ntheir own decision making, and increased mental demand equally and frustration\nnegligibly. These results present an alternative approach to improving team\nperformance to post-hoc explanations and illustrate the impact of judgment on\nhuman cognition in working with recommender systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"65 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Aligning Judgment Using Task Context and Explanations to Improve Human-Recommender System Performance\",\"authors\":\"Divya Srivastava, Karen M. Feigh\",\"doi\":\"arxiv-2409.10717\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recommender systems, while a powerful decision making tool, are often\\noperationalized as black box models, such that their AI algorithms are not\\naccessible or interpretable by human operators. This in turn can cause\\nconfusion and frustration for the operator and result in unsatisfactory\\noutcomes. While the field of explainable AI has made remarkable strides in\\naddressing this challenge by focusing on interpreting and explaining the\\nalgorithms to human operators, there are remaining gaps in the human's\\nunderstanding of the recommender system. This paper investigates the relative\\nimpact of using context, properties of the decision making task and\\nenvironment, to align human and AI algorithm understanding of the state of the\\nworld, i.e. judgment, to improve joint human-recommender performance as\\ncompared to utilizing post-hoc algorithmic explanations. 
We conducted an\\nempirical, between-subjects experiment in which participants were asked to work\\nwith an automated recommender system to complete a decision making task. We\\nmanipulated the method of transparency (shared contextual information to\\nsupport shared judgment vs algorithmic explanations) and record the human's\\nunderstanding of the task, the recommender system, and their overall\\nperformance. We found that both techniques yielded equivalent agreement on\\nfinal decisions. However, those who saw task context had less tendency to\\nover-rely on the recommender system and were able to better pinpoint in what\\nconditions the AI erred. Both methods improved participants' confidence in\\ntheir own decision making, and increased mental demand equally and frustration\\nnegligibly. These results present an alternative approach to improving team\\nperformance to post-hoc explanations and illustrate the impact of judgment on\\nhuman cognition in working with recommender systems.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"65 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10717\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10717","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recommender systems, while a powerful decision-making tool, are often operationalized as black-box models, such that their AI algorithms are not accessible or interpretable by human operators. This in turn can cause confusion and frustration for the operator and result in unsatisfactory outcomes. While the field of explainable AI has made remarkable strides in addressing this challenge by focusing on interpreting and explaining the algorithms to human operators, gaps remain in the human's understanding of the recommender system. This paper investigates the relative impact of using context, i.e., properties of the decision-making task and environment, to align human and AI algorithm understanding of the state of the world, i.e., judgment, and thereby improve joint human-recommender performance, as compared to utilizing post-hoc algorithmic explanations. We conducted an empirical, between-subjects experiment in which participants were asked to work with an automated recommender system to complete a decision-making task. We manipulated the method of transparency (shared contextual information to support shared judgment vs. algorithmic explanations) and recorded the human's understanding of the task, the recommender system, and their overall performance. We found that both techniques yielded equivalent agreement on final decisions. However, participants who saw task context were less inclined to over-rely on the recommender system and were better able to pinpoint the conditions under which the AI erred. Both methods improved participants' confidence in their own decision making, increased mental demand equally, and increased frustration only negligibly. These results present an alternative to post-hoc explanations for improving team performance and illustrate the impact of judgment on human cognition when working with recommender systems.