Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?
Impact Factor: 7.8 | CAS Tier 1 (Management) | JCR Q1: Information Science & Library Science
{"title":"Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?","authors":"Naomi Aoki , Tomohiko Tatsumi , Go Naruse , Kentaro Maeda","doi":"10.1016/j.giq.2024.101965","DOIUrl":null,"url":null,"abstract":"<div><p>Amidst concerns over biased and misguided government decisions arrived at through algorithmic treatment, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of decisions is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses with two studies, each of which involved a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (<em>N</em> = 1200) were officers in high positions at stock companies registered in Japan, who were presented with a scenario consisting of an algorithmic decision made by a public authority: a ministry's decision to reject a grant application from their company (Study 1) and a tax authority's decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect on their attitude towards a decision, to various extents, although the detailed results of the two studies are not robust. These findings call for a nuanced inquiry, both in research and practice, into how to best design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.</p></div>","PeriodicalId":48258,"journal":{"name":"Government Information Quarterly","volume":"41 4","pages":"Article 101965"},"PeriodicalIF":7.8000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0740624X24000571/pdfft?md5=79e232415d01bd4e88037e2540f7bb9f&pid=1-s2.0-S0740624X24000571-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Government Information Quarterly","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0740624X24000571","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
Amidst concerns over biased and misguided government decisions arrived at through algorithmic treatment, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of a decision is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses with two studies, each involving a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (N = 1200) were senior officers at stock companies registered in Japan, who were presented with a scenario involving an algorithmic decision made by a public authority: a ministry's decision to reject a grant application from their company (Study 1) and a tax authority's decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect, to varying extents, on their attitudes towards the decision, although the detailed results were not robust across the two studies. These findings call for a nuanced inquiry, both in research and practice, into how best to design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.
About the journal
Government Information Quarterly (GIQ) examines the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the relationship between citizens and governing bodies in the digital age. GIQ is a premier venue for high-quality research and insights that bridge these realms.