Markus Plass, Michaela Kargl, Tim-Rasmus Kiehl, Peter Regitnig, Christian Geißler, Theodore Evans, Norman Zerbe, Rita Carvalho, Andreas Holzinger, Heimo Müller
{"title":"数字病理学的可解释性和因果性","authors":"Markus Plass, Michaela Kargl, Tim-Rasmus Kiehl, Peter Regitnig, Christian Geißler, Theodore Evans, Norman Zerbe, Rita Carvalho, Andreas Holzinger, Heimo Müller","doi":"10.1002/cjp2.322","DOIUrl":null,"url":null,"abstract":"<p>The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what-if’-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.</p>","PeriodicalId":48612,"journal":{"name":"Journal of Pathology Clinical Research","volume":"9 4","pages":"251-260"},"PeriodicalIF":3.4000,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://pathsocjournals.onlinelibrary.wiley.com/doi/epdf/10.1002/cjp2.322","citationCount":"4","resultStr":"{\"title\":\"Explainability and causability in digital pathology\",\"authors\":\"Markus Plass, Michaela Kargl, Tim-Rasmus Kiehl, Peter Regitnig, Christian Geißler, Theodore Evans, Norman Zerbe, Rita Carvalho, Andreas Holzinger, Heimo Müller\",\"doi\":\"10.1002/cjp2.322\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. 
A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what-if’-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.</p>\",\"PeriodicalId\":48612,\"journal\":{\"name\":\"Journal of Pathology Clinical Research\",\"volume\":\"9 4\",\"pages\":\"251-260\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2023-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://pathsocjournals.onlinelibrary.wiley.com/doi/epdf/10.1002/cjp2.322\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Pathology Clinical Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cjp2.322\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pathology Clinical Research","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cjp2.322","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PATHOLOGY","Score":null,"Total":0}
Explainability and causability in digital pathology
The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, the best-performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear, even to their developers, why an algorithm delivered a particular result. In medicine especially, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning helps the reader understand why explainability is a particular issue in this field. To address it, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods for making black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology that enable contextual understanding and allow the medical expert to ask interactive ‘what-if’ questions. In pathology, such user interfaces will not only be important for achieving a high level of causability; they will also be crucial for keeping the human in the loop and bringing medical experts' experience and conceptual knowledge into AI processes.
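To make the idea of an XAI method concrete, the following is a minimal, illustrative sketch (not taken from the paper) of occlusion-based saliency, one of the simplest techniques for probing a black-box image classifier: a region of the input is hidden and the resulting drop in the predicted class probability is recorded, which answers a basic ‘what if the model could not see this region?’ question. The model, tile size, and class index below are hypothetical placeholders; a real deployment would use a classifier trained on whole slide image tiles.

```python
# Illustrative sketch only: occlusion-based saliency for a black-box
# image classifier. All names and parameters here are placeholders,
# not the method or models described in the reviewed paper.
import torch
import torch.nn.functional as F
import torchvision.models as models

def occlusion_saliency(model, image, target_class, patch=32, stride=32):
    """Slide a grey square over the image; the drop in the target-class
    probability where a region is hidden indicates how much the model
    relied on that region."""
    model.eval()
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    _, h, w = image.shape
    heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = 0.5  # grey occluder
            with torch.no_grad():
                p = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[i, j] = base - p  # probability drop = region importance
    return heat

# Usage: an untrained ResNet stands in for a trained tissue classifier.
model = models.resnet18(weights=None)
tile = torch.rand(3, 224, 224)  # stands in for a 224x224 WSI tile
heatmap = occlusion_saliency(model, tile, target_class=0)
print(heatmap.shape)  # e.g. torch.Size([7, 7])
```

Overlaying such a heatmap on the original tile gives the pathologist a visual hint of which regions drove the prediction; the review's argument is that such raw explanations still need an explanation interface around them before they yield genuine causability for the user.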
About the journal:
The Journal of Pathology: Clinical Research and The Journal of Pathology serve as translational bridges between basic biomedical science and clinical medicine, with particular emphasis on, but not restricted to, tissue-based studies.
The focus of The Journal of Pathology: Clinical Research is the publication of studies that illuminate the clinical relevance of research in the broad area of the study of disease. Appropriately powered and validated studies with novel diagnostic, prognostic, and predictive significance, and biomarker discovery and validation, are welcomed. Studies with a predominantly mechanistic basis are more appropriate for the companion Journal of Pathology.