Unlocking the black box: Enhancing human-AI collaboration in high-stakes healthcare scenarios through explainable AI
Reda Hassan, Nhien Nguyen, Stine Rasdal Finserås, Lars Adde, Inga Strümke, Ragnhild Støen
Technological Forecasting and Social Change, Volume 219, Article 124265, published 2025-07-23. DOI: 10.1016/j.techfore.2025.124265
Abstract
Despite the advanced predictive capabilities of artificial intelligence (AI) systems, their inherent opacity often leaves users confused about the rationale behind their outputs. We investigate the challenge of AI opacity, which undermines user trust and the effectiveness of clinical judgment in healthcare. We demonstrate how human experts form judgments in high-stakes scenarios where their judgment diverges from AI predictions, emphasizing the need for explainability to enhance clinical judgment and trust in AI systems. We used a scenario-based methodology, conducting 28 semi-structured interviews and observations with clinicians from Norway and Egypt. Our analysis revealed that, during the process of forming judgments, human experts engage in AI interrogation practices when faced with opaque AI systems. Obtaining explainability from AI systems leads to increased interrogation practices aimed at gaining a deeper understanding of AI predictions. With the introduction of explainable AI (XAI), experts demonstrate greater trust in the AI system, show a readiness to learn from AI, and may reconsider or update their initial judgments when these contradict AI predictions.
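The abstract refers to clinicians receiving explanations alongside AI predictions but does not name a specific XAI technique, model, or dataset. As a purely illustrative sketch, the snippet below shows one generic post-hoc explainability approach, permutation feature importance computed for a classifier trained on synthetic data; the model, features, and library choices are assumptions for illustration, not details taken from the study.

```python
# Illustrative only: the study does not specify an XAI method or data.
# This sketch shows a generic post-hoc explanation (permutation feature
# importance) that could accompany a clinical prediction model's output.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features (hypothetical, not the study's data).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: drop in held-out accuracy when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this is one simple way an AI system could expose the rationale behind a prediction, which is the kind of explainability the study examines from the clinician's side.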
Journal Introduction
Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors.
In addition to serving as a key forum for these discussions, we offer numerous benefits for authors, including complimentary PDFs, a generous copyright policy, exclusive discounts on Elsevier publications, and more.