HEX: Human-in-the-loop explainability via deep reinforcement learning

IF 6.7 | CAS Tier 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Michael T. Lash
{"title":"HEX:通过深度强化学习实现人在回路中的可解释性","authors":"Michael T. Lash","doi":"10.1016/j.dss.2024.114304","DOIUrl":null,"url":null,"abstract":"<div><div>The use of machine learning (ML) models in decision-making contexts, particularly those used in high-stakes decision-making, are fraught with issue and peril since a person – not a machine – must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that the model-elicited predictions are made <em>for the right reasons</em> and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider’s preferred explanatory features using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question using a proposed <em>explanatory point</em> mode of explanation, thus ensuring explanations are specific to the ML model in question. We empirically evaluate HEX against other competing methods, finding that HEX is competitive with the state-of-the-art and outperforms other methods in human-in-the-loop scenarios. We conduct a randomized, controlled laboratory experiment utilizing actual explanations elicited from both HEX and competing methods. We causally establish that our method increases decider’s trust and tendency to rely on trusted features.</div></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"187 ","pages":"Article 114304"},"PeriodicalIF":6.7000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HEX: Human-in-the-loop explainability via deep reinforcement learning\",\"authors\":\"Michael T. Lash\",\"doi\":\"10.1016/j.dss.2024.114304\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The use of machine learning (ML) models in decision-making contexts, particularly those used in high-stakes decision-making, are fraught with issue and peril since a person – not a machine – must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that the model-elicited predictions are made <em>for the right reasons</em> and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider’s preferred explanatory features using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question using a proposed <em>explanatory point</em> mode of explanation, thus ensuring explanations are specific to the ML model in question. We empirically evaluate HEX against other competing methods, finding that HEX is competitive with the state-of-the-art and outperforms other methods in human-in-the-loop scenarios. 
We conduct a randomized, controlled laboratory experiment utilizing actual explanations elicited from both HEX and competing methods. We causally establish that our method increases decider’s trust and tendency to rely on trusted features.</div></div>\",\"PeriodicalId\":55181,\"journal\":{\"name\":\"Decision Support Systems\",\"volume\":\"187 \",\"pages\":\"Article 114304\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Decision Support Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167923624001374\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923624001374","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The use of machine learning (ML) models in decision-making contexts, particularly in high-stakes decision-making, is fraught with issues and peril, since a person – not a machine – must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that model-elicited predictions are made for the right reasons and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider's preferred explanatory features, using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question via a proposed explanatory point mode of explanation, thus ensuring explanations are specific to that model. We empirically evaluate HEX against competing methods, finding that it is competitive with the state of the art and outperforms other methods in human-in-the-loop scenarios. We also conduct a randomized, controlled laboratory experiment utilizing actual explanations elicited from both HEX and competing methods, causally establishing that our method increases deciders' trust and their tendency to rely on trusted features.
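To make the abstract's two core ideas concrete, below is a minimal, hypothetical sketch, not the authors' implementation: it approximates an explanatory point as the nearest prediction flip reachable from an instance while perturbing only the decider's trusted features (a simple stand-in for the 0-distrust projection), and it uses bisection in place of the paper's deep reinforcement learning search. All names here (`zero_distrust_project`, `explanatory_point`, `trusted_idx`) are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of the abstract's ideas, under the assumptions stated above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def zero_distrust_project(delta, trusted_idx, n_features):
    """Zero out movement along features the decider does not trust
    (illustrative stand-in for the paper's 0-distrust projection)."""
    mask = np.zeros(n_features)
    mask[trusted_idx] = 1.0
    return delta * mask

def explanatory_point(model, x, x_ref, trusted_idx, tol=1e-4):
    """Bisect between x and an opposite-class reference point, moving only
    along trusted features, until the model's decision boundary is reached."""
    delta = zero_distrust_project(x_ref - x, trusted_idx, x.shape[0])
    base = model.predict(x.reshape(1, -1))[0]
    if model.predict((x + delta).reshape(1, -1))[0] == base:
        return None  # boundary not reachable along trusted features alone
    lo, hi = 0.0, 1.0  # lo keeps the original class, hi flips it
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if model.predict((x + mid * delta).reshape(1, -1))[0] == base:
            lo = mid
        else:
            hi = mid
    return x + hi * delta

# Toy black-box classifier and a single instance to explain.
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
clf = LogisticRegression().fit(X, y)
x, x_ref = X[y == 0][0], X[y == 1][0]  # instance and opposite-class anchor
xp = explanatory_point(clf, x, x_ref, trusted_idx=[0, 2])
if xp is not None:
    print("Explanation (changes on trusted features only):", xp - x)
```

Because the search is confined to the trusted-feature subspace and terminates at the classifier's decision boundary, the resulting difference vector is expressed strictly in the decider's preferred terms and is specific to the model at hand, which mirrors the two properties the abstract claims for HEX.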
Source journal

Decision Support Systems (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 14.70
Self-citation rate: 6.70%
Articles per year: 119
Review time: 13 months
Journal description: The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).