Model Agnostic Explainability Techniques in Ultrasound Image Analysis
Nicoletta Prentzas, Marios Pitsiali, E. Kyriacou, Andrew N. Nicolaides, A. Kakas, C. Pattichis
2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE), 2021-10-25
DOI: 10.1109/BIBE52308.2021.9635199
Citations: 3

Abstract
The current adoption of Medical Artificial Intelligence (AI) solutions in clinical practice suggests that, despite its undeniable potential, AI is not yet realizing that potential. A major barrier to its adoption is the lack of transparency and interpretability, and the inability of such systems to explain their results. Explainable AI (XAI) is an emerging field in AI that aims to address these barriers through the development of new or modified algorithms that enable transparency, provide explanations in a way that humans can understand, and foster trust. Numerous XAI techniques have been proposed in the literature, commonly classified as model-agnostic or model-specific. In this study, we examine the application of four model-agnostic XAI techniques (LIME, SHAP, ANCHORS, inTrees) to an XGBoost classifier trained on real-life medical data for the prediction of high-risk asymptomatic carotid plaques based on ultrasound image analysis. We present and compare local explanations for selected observations in the test set. We also present global explanations generated by these techniques that describe the behavior of the entire model. Additionally, we assess the quality of the explanations using properties suggested in the literature. Finally, we discuss the results of this comparative study and suggest directions for future work.
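To make the described workflow concrete, the following is a minimal sketch of how local and global model-agnostic explanations can be generated for an XGBoost classifier with SHAP and LIME. It does not reproduce the paper's dataset, features, or hyperparameters: the feature names (e.g., gsm, plaque_area) and the synthetic data are purely illustrative placeholders.

```python
import numpy as np
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical ultrasound-derived plaque features; the study's real data are not reproduced here.
rng = np.random.default_rng(0)
feature_names = ["gsm", "plaque_area", "dwsi", "coarseness"]
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, size=200)   # 0 = low-risk, 1 = high-risk (assumed labels)
X_test = rng.normal(size=(20, 4))

# Train an XGBoost classifier (hyperparameters assumed, not taken from the paper).
model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

# SHAP: local attribution for one observation, global importance as mean |SHAP| per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Local SHAP values (first test observation):", shap_values[0])
print("Global mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# LIME: local explanation for the same observation via a surrogate linear model.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["low-risk", "high-risk"],
    discretize_continuous=True,
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print("Local LIME explanation:", lime_exp.as_list())
```

ANCHORS (rule-based local explanations) and inTrees (rule extraction from tree ensembles) follow analogous patterns but are omitted here for brevity.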