{"title":"HEX: Human-in-the-loop explainability via deep reinforcement learning","authors":"Michael T. Lash","doi":"10.1016/j.dss.2024.114304","DOIUrl":null,"url":null,"abstract":"<div><div>The use of machine learning (ML) models in decision-making contexts, particularly those used in high-stakes decision-making, are fraught with issue and peril since a person – not a machine – must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that the model-elicited predictions are made <em>for the right reasons</em> and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider’s preferred explanatory features using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question using a proposed <em>explanatory point</em> mode of explanation, thus ensuring explanations are specific to the ML model in question. We empirically evaluate HEX against other competing methods, finding that HEX is competitive with the state-of-the-art and outperforms other methods in human-in-the-loop scenarios. We conduct a randomized, controlled laboratory experiment utilizing actual explanations elicited from both HEX and competing methods. We causally establish that our method increases decider’s trust and tendency to rely on trusted features.</div></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"187 ","pages":"Article 114304"},"PeriodicalIF":6.7000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923624001374","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The use of machine learning (ML) models in decision-making contexts, particularly high-stakes ones, is fraught with issues and peril since a person – not a machine – must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that the model-elicited predictions are made for the right reasons and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider’s preferred explanatory features, using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question via a proposed explanatory point mode of explanation, thus ensuring that explanations are specific to that model. We empirically evaluate HEX against competing methods, finding that HEX is competitive with the state of the art and outperforms other methods in human-in-the-loop scenarios. We conduct a randomized, controlled laboratory experiment utilizing actual explanations elicited from both HEX and competing methods. We causally establish that our method increases deciders’ trust and their tendency to rely on trusted features.
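To make the core idea concrete, the following is a minimal, hypothetical Python sketch of the kind of decider-constrained, decision-boundary-aware explanation the abstract describes: searching for an "explanatory point" on the other side of a classifier's decision boundary while perturbing only the decider's preferred (trusted) features. This is not the HEX algorithm itself (HEX trains a deep reinforcement learning explainer with 0-distrust projection); the classifier, the preferred-feature indices, and the random-search procedure here are illustrative assumptions only.

# Hypothetical illustration, not the paper's method: find a nearby point that
# flips the model's prediction by changing only decider-preferred features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)  # any classifier could be substituted

x = X[0]                        # instance to explain
preferred = [0, 2, 3]           # assumed indices of decider-preferred (trusted) features
orig_label = clf.predict(x.reshape(1, -1))[0]

best, best_dist = None, np.inf
for _ in range(2000):
    x_prime = x.copy()
    # perturb only the preferred features; all others are held fixed
    x_prime[preferred] += rng.normal(scale=0.5, size=len(preferred))
    # keep the closest perturbation that crosses the decision boundary
    if clf.predict(x_prime.reshape(1, -1))[0] != orig_label:
        d = np.linalg.norm(x_prime - x)
        if d < best_dist:
            best, best_dist = x_prime, d

if best is not None:
    # the explanation is the change x -> x', expressed only in trusted features
    print("Explanatory point found; feature changes:", best - x)

Because the perturbation is restricted to the preferred feature indices, any resulting explanation is, by construction, stated only in terms of features the decider already trusts, which is the property the abstract attributes to decider-specific explainers.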
About the journal:
The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).