Xinran Wang, Zisu Wang, Mateusz Dolata, Jay F. Nunamaker
How credibility assessment technologies affect decision fairness in evidence-based investigations: A Bayesian perspective

DOI: 10.1016/j.dss.2024.114326
Decision Support Systems, Volume 187, Article 114326
Published: 2024-09-06
URL: https://www.sciencedirect.com/science/article/pii/S0167923624001593
Citations: 0
Abstract
Recently, a growing number of credibility assessment technologies (CATs) have been developed to assist human decision-making processes in evidence-based investigations, such as criminal investigations, financial fraud detection, and insurance claim verification. Despite the widespread adoption of CATs, it remains unclear how CAT and human biases interact during the evidence-collection procedure and affect the fairness of investigation outcomes. To address this gap, we develop a Bayesian framework to model CAT adoption and the iterative collection and interpretation of evidence in investigations. Based on the Bayesian framework, we further conduct simulations to examine how CATs affect investigation fairness with various configurations of evidence effectiveness, CAT effectiveness, human biases, technological biases, and decision stakes. We find that when investigators are unconscious of their own biases, CAT adoption generally increases the fairness of investigation outcomes if the CAT is more effective than evidence and less biased than the investigators. However, the CATs' positive influence on fairness diminishes as humans become aware of their own biases. Our results show that CATs' impact on decision fairness highly depends on various technological, human, and contextual factors. We further discuss the implications for CAT development, evaluation, and adoption based on our findings.
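The iterative evidence interpretation described above can be illustrated with a toy Bayesian update. The sketch below is not the authors' model; all function names, parameter values, and the specific bias mechanism (inflating the perceived diagnosticity of incriminating evidence) are hypothetical, chosen only to show how a biased reading of the same evidence shifts the posterior belief.

```python
def posterior(prior, p_e_guilty, p_e_innocent):
    """Bayes' rule for one piece of evidence e:
    P(G|e) = P(e|G)P(G) / [P(e|G)P(G) + P(e|~G)P(~G)]."""
    num = p_e_guilty * prior
    return num / (num + p_e_innocent * (1 - prior))

# An unbiased investigator reads one incriminating item with
# diagnosticity P(e|G)=0.7 vs P(e|~G)=0.3, starting from a 50% prior:
belief_unbiased = posterior(0.5, 0.7, 0.3)   # -> 0.7

# A biased investigator (or a biased CAT) perceives the same item as
# more diagnostic (0.8 vs 0.2), pushing the posterior further toward guilt:
belief_biased = posterior(0.5, 0.8, 0.2)     # -> 0.8
```

Iterating this update over successive items compounds the gap between biased and unbiased beliefs, which is one intuition for why the interaction of human and technological biases matters for investigation fairness.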
About the journal
The common thread among articles published in Decision Support Systems is their relevance to theoretical and technical issues in supporting enhanced decision making. Areas addressed may include the foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).