Florien S van Royen, Hilde J P Weerts, Anne A H de Hond, Geert-Jan Geersing, Frans H Rutten, Karel G M Moons, Maarten van Smeden
{"title":"为医疗领域无法解释的黑箱预测模型进行谦逊的辩护。","authors":"Florien S van Royen, Hilde J P Weerts, Anne A H de Hond, Geert-Jan Geersing, Frans H Rutten, Karel G M Moons, Maarten van Smeden","doi":"10.1016/j.jclinepi.2025.112013","DOIUrl":null,"url":null,"abstract":"<p><p>The increasing complexity of prediction models for healthcare purposes - whether developed with or without artificial intelligence (AI) techniques - drives the urge to open complex 'black box' models using eXplainable AI (XAI) techniques. In this paper, we argue that XAI may not necessarily provide insights relevant to decision-making in the medical setting and can lead to misplaced trust and misinterpretation of the model's usability. An important limitation of XAI is the difficulty in avoiding causal interpretation, which may result in confirmation bias or false dismissal of the model when explanations conflict with clinical knowledge. Rather than expecting XAI to generate trust in black box prediction models to patients and healthcare providers, trust should be grounded in rigorous prediction model validations and model impact studies assessing the model's effectiveness on medical shared decision-making. In this paper, we therefore humbly defend the 'unexplainable' prediction models in healthcare.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"112013"},"PeriodicalIF":5.2000,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"In humble defence of unexplainable black box prediction models in healthcare.\",\"authors\":\"Florien S van Royen, Hilde J P Weerts, Anne A H de Hond, Geert-Jan Geersing, Frans H Rutten, Karel G M Moons, Maarten van Smeden\",\"doi\":\"10.1016/j.jclinepi.2025.112013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The increasing complexity of prediction models for healthcare purposes - whether developed with or without artificial intelligence (AI) techniques - drives the urge to open complex 'black box' models using eXplainable AI (XAI) techniques. In this paper, we argue that XAI may not necessarily provide insights relevant to decision-making in the medical setting and can lead to misplaced trust and misinterpretation of the model's usability. An important limitation of XAI is the difficulty in avoiding causal interpretation, which may result in confirmation bias or false dismissal of the model when explanations conflict with clinical knowledge. Rather than expecting XAI to generate trust in black box prediction models to patients and healthcare providers, trust should be grounded in rigorous prediction model validations and model impact studies assessing the model's effectiveness on medical shared decision-making. 
In this paper, we therefore humbly defend the 'unexplainable' prediction models in healthcare.</p>\",\"PeriodicalId\":51079,\"journal\":{\"name\":\"Journal of Clinical Epidemiology\",\"volume\":\" \",\"pages\":\"112013\"},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2025-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Clinical Epidemiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1016/j.jclinepi.2025.112013\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Clinical Epidemiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jclinepi.2025.112013","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
In humble defence of unexplainable black box prediction models in healthcare.
The increasing complexity of prediction models for healthcare purposes - whether developed with or without artificial intelligence (AI) techniques - drives the urge to open complex 'black box' models using eXplainable AI (XAI) techniques. In this paper, we argue that XAI does not necessarily provide insights relevant to decision-making in the medical setting and can lead to misplaced trust and misinterpretation of a model's usability. An important limitation of XAI is the difficulty of avoiding causal interpretation, which may result in confirmation bias or in false dismissal of the model when explanations conflict with clinical knowledge. Rather than expecting XAI to generate trust in black box prediction models among patients and healthcare providers, trust should be grounded in rigorous prediction model validation and in model impact studies that assess a model's effect on shared medical decision-making. In this paper, we therefore humbly defend 'unexplainable' prediction models in healthcare.
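To make the abstract's central caution concrete, the following is a minimal sketch (not from the paper) of a common XAI-style workflow: fitting a black-box model and inspecting feature attributions with scikit-learn's permutation importance. The dataset, model choice, and feature indices are all hypothetical stand-ins for clinical predictor data.

```python
# A minimal, hypothetical illustration of an XAI-style attribution workflow.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical predictor data.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much predictive performance drops
# when a feature's values are shuffled. The score is purely associational:
# a high value means the model *uses* the feature for prediction, not that
# the feature *causes* the outcome - precisely the misreading the authors
# warn against.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Attribution rankings like these can conflict with clinical expectations even when the model predicts well, which is why the authors argue that trust should rest on validation and impact studies rather than on the explanations themselves.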
Journal introduction:
The Journal of Clinical Epidemiology strives to enhance the quality of clinical and patient-oriented healthcare research by advancing and applying innovative methods in conducting, presenting, synthesizing, disseminating, and translating research results into optimal clinical practice. Special emphasis is placed on training new generations of scientists and clinical practice leaders.