Explainable artificial intelligence in the prediction of high-risk asymptomatic carotid plaques based on ultrasonic image features.

Nicoletta Prentzas, Chara S Skouteli, Efthyvoulos Kyriacou, Maura Griffin, Andrew Nicolaides, Christos P Loizou, Antonis Kakas, Constantinos S Pattichis

International Angiology, vol. 44, no. 4, pp. 291-300, August 2025. DOI: 10.23736/S0392-9590.25.05449-5
Background: The addition of ultrasonic plaque texture features to clinical features in patients with asymptomatic internal carotid artery stenosis (ACS) improved the ability of a Support Vector Machine (SVM) model to identify plaques that are likely to produce stroke. However, SVM, like many Artificial Intelligence (AI) black-box models, lacks transparency, limiting its adoption in critical settings. Explainable AI (XAI) techniques offer potential solutions by making model decisions more interpretable. This study investigates whether incorporating XAI techniques can improve interpretability without significantly compromising predictive accuracy in stroke risk assessment.
Methods: We developed an Argumentation-based Explainable Machine Learning (ArgEML) methodology and framework for explainable machine learning predictions via argumentation. We used this framework to learn explainable argumentation theories from a real-life dataset of patients with asymptomatic carotid stenosis. We assessed the performance of these theories using standard machine learning (ML) metrics, while interpretability was evaluated through model transparency and quality of explanations.
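To make the idea of argumentation-based prediction concrete, the following is an illustrative sketch only — it is not the authors' ArgEML framework, and every feature name, threshold, and rule in it is hypothetical. It shows the general pattern the abstract describes: prioritized if-then arguments produce a label together with the rule that supports it (the explanation), and when equally strong arguments conflict, the case is reported as a dilemma rather than forced into a single prediction.

```python
def predict_with_explanation(patient):
    """Return (label, explanation) using prioritized if-then arguments.

    An argument "fires" when its condition holds; a higher-priority argument
    defeats a lower-priority one. If top-priority arguments support
    conflicting labels, the case is reported as a dilemma with both sides.
    All rules below are hypothetical examples, not clinical criteria.
    """
    # (priority, label, reason, condition)
    arguments = [
        (2, "high-risk", "severe stenosis (>70%)",
         lambda p: p["stenosis_pct"] > 70),
        (2, "low-risk",  "echogenic plaque texture (high grey-scale median)",
         lambda p: p["gsm"] > 40),
        (1, "high-risk", "large plaque area",
         lambda p: p["plaque_area_mm2"] > 80),
        (1, "low-risk",  "moderate stenosis (<50%)",
         lambda p: p["stenosis_pct"] < 50),
    ]
    fired = [(pri, lab, why) for pri, lab, why, cond in arguments if cond(patient)]
    if not fired:
        return "undecided", "no argument applies"
    top = max(pri for pri, _, _ in fired)
    winners = {(lab, why) for pri, lab, why in fired if pri == top}
    labels = {lab for lab, _ in winners}
    if len(labels) == 1:
        lab, why = winners.pop()
        return lab, why
    # Conflicting top-priority arguments: surface the dilemma, not a guess.
    return "dilemma", "; ".join(sorted(f"{lab}: {why}" for lab, why in winners))
```

For example, a patient with 80% stenosis and a dark (low grey-scale median) plaque yields a high-risk prediction explained by the stenosis rule, while a patient where the stenosis and texture rules disagree yields a dilemma listing both supporting arguments — mirroring how the abstract describes undecided predictions still carrying useful information.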
Results: The ArgEML models maintain high predictive accuracy while significantly improving the interpretability of the predictions. Moreover, undecided predictions are treated as dilemmas, which still offer valuable information through explanations of the competing prediction possibilities.
Conclusions: Our findings suggest that ArgEML enhances the interpretability of stroke prediction from real-life medical data without sacrificing predictive performance. Moreover, explanations offer valuable insights into misclassified cases and cases where a definite prediction cannot be derived. This transparency can help refine a model, guide clinical decisions, and improve AI adoption in healthcare.
Journal description:
International Angiology publishes scientific papers on angiology. Manuscripts may be submitted in the form of editorials, original articles, review articles, special articles, letters to the Editor, and guidelines. The journal aims to provide its readers with papers of the highest quality and impact through a process of careful peer review and editorial work. The duties and responsibilities of all parties involved in the editorial process are summarized in the Publication ethics section. Manuscripts are expected to comply with the instructions to authors, which conform to the Uniform Requirements for Manuscripts Submitted to Biomedical Journals by the International Committee of Medical Journal Editors (ICMJE).