{"title":"可解释医疗人工智能的优点。","authors":"Joshua Hatherley, Robert Sparrow, Mark Howard","doi":"10.1017/S0963180122000664","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are \"black boxes.\" The initial response in the literature was a demand for \"explainable AI.\" However, recently, several authors have suggested that making AI more explainable or \"interpretable\" is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a \"lethal prejudice.\" In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to-and perhaps even harm-patients.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"323-332"},"PeriodicalIF":1.5000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Virtues of Interpretable Medical AI.\",\"authors\":\"Joshua Hatherley, Robert Sparrow, Mark Howard\",\"doi\":\"10.1017/S0963180122000664\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are \\\"black boxes.\\\" The initial response in the literature was a demand for \\\"explainable AI.\\\" However, recently, several authors have suggested that making AI more explainable or \\\"interpretable\\\" is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a \\\"lethal prejudice.\\\" In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. 
A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to-and perhaps even harm-patients.</p>\",\"PeriodicalId\":55300,\"journal\":{\"name\":\"Cambridge Quarterly of Healthcare Ethics\",\"volume\":\" \",\"pages\":\"323-332\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cambridge Quarterly of Healthcare Ethics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1017/S0963180122000664\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/1/10 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cambridge Quarterly of Healthcare Ethics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1017/S0963180122000664","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/10 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to patients, and perhaps even harm them.
Journal Introduction:
The Cambridge Quarterly of Healthcare Ethics is designed to address the challenges of biology, medicine and healthcare and to meet the needs of professionals serving on healthcare ethics committees in hospitals, nursing homes, hospices and rehabilitation centres. The aim of the journal is to serve as the international forum for the wide range of serious and urgent issues faced by members of healthcare ethics committees, physicians, nurses, social workers, clergy, lawyers and community representatives.