Explainable artificial intelligence in the prediction of high-risk asymptomatic carotid plaques based on ultrasonic image features

IF 1.4 · Medicine (CAS Tier 4) · JCR Q3, Peripheral Vascular Disease
Nicoletta Prentzas, Chara S Skouteli, Efthyvoulos Kyriacou, Maura Griffin, Andrew Nicolaides, Christos P Loizou, Antonis Kakas, Constantinos S Pattichis
DOI: 10.23736/S0392-9590.25.05449-5
International Angiology, 2025;44(4):291-300. Published 2025-08-01.
Citations: 0

Abstract


Background: The addition of ultrasonic plaque texture features to clinical features in patients with asymptomatic internal carotid artery stenosis (ACS) improved the ability of a Support Vector Machine (SVM) model to identify plaques that are likely to produce stroke. However, SVMs, like many Artificial Intelligence (AI) black-box models, lack transparency, limiting their adoption in critical settings. Explainable AI (XAI) techniques offer potential solutions by making model decisions more interpretable. This study investigates whether incorporating XAI techniques can improve interpretability without significantly compromising predictive accuracy in stroke risk assessment.
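The SVM baseline described above can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-ins for the clinical and ultrasonic texture features; the feature counts, random data, and labels are invented for the sketch and are not the study's dataset or its reported results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 patients, a few clinical features (e.g. degree of
# stenosis) plus ultrasonic plaque texture features (e.g. greyscale statistics).
X_clinical = rng.normal(size=(200, 3))
X_texture = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)  # 1 = plaque associated with an event

# Compare clinical features alone vs. clinical + texture features,
# mirroring the study design at a toy scale.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc_clinical = cross_val_score(svm, X_clinical, y, cv=5).mean()
acc_combined = cross_val_score(svm, np.hstack([X_clinical, X_texture]), y, cv=5).mean()
print(f"clinical only: {acc_clinical:.2f}, clinical + texture: {acc_combined:.2f}")
```

With real data, an improvement in the combined score would motivate including texture features; on this random data the two scores are of course comparable.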

Methods: We developed an Argumentation-based Explainable Machine Learning (ArgEML) methodology and framework for producing explainable predictions via argumentation. We used this framework to learn explainable argumentation theories from a real-life dataset of patients with asymptomatic carotid stenosis. We assessed the performance of these theories using standard machine learning (ML) metrics and evaluated their interpretability through model transparency and the quality of explanations.

Results: The ArgEML models maintained high predictive accuracy while significantly improving the interpretability of the predictions. Moreover, undecided predictions are treated as dilemmas, which still provide valuable information through the explanations of the competing predictions.
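The notion of an undecided prediction reported as a dilemma, together with its supporting explanations, can be illustrated with a toy rule-based sketch. The rules, feature names, and thresholds below are invented for illustration only; they are not the learned ArgEML theory from the paper.

```python
# Toy sketch of argumentation-style prediction with explicit dilemmas.

def arguments_for(patient):
    """Collect applicable arguments as (conclusion, reason) pairs."""
    args = []
    if patient["stenosis_pct"] >= 70:
        args.append(("high_risk", "severe stenosis (>=70%)"))
    if patient["gsm"] < 30:
        args.append(("high_risk", "low greyscale median (echolucent plaque)"))
    if patient["stenosis_pct"] < 50:
        args.append(("low_risk", "moderate stenosis (<50%)"))
    if patient["gsm"] >= 30:
        args.append(("low_risk", "echogenic plaque (GSM >= 30)"))
    return args

def predict(patient):
    """Return (label, explanations); conflicting support yields a dilemma."""
    args = arguments_for(patient)
    support = {conclusion for conclusion, _ in args}
    if len(support) == 1:
        label = support.pop()
    else:
        label = "dilemma"  # undecided: arguments pull both ways
    return label, [reason for conclusion, reason in args
                   if label == "dilemma" or conclusion == label]

# Arguments for both classes apply, so the prediction is a dilemma,
# reported together with the reasons on each side.
label, why = predict({"stenosis_pct": 80, "gsm": 45})
print(label, why)
```

The point of the sketch is that even the undecided case carries information: the clinician sees which arguments support each outcome rather than receiving a bare probability.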

Conclusions: Our findings suggest that ArgEML enhances the interpretability of stroke prediction from real-life medical data without sacrificing predictive performance. Moreover, explanations offer valuable insights into misclassified cases and cases where a definite prediction cannot be derived. This transparency can help refine a model, guide clinical decisions, and improve AI adoption in healthcare.

Source journal: International Angiology (Medicine, Peripheral Vascular Disease)
CiteScore: 2.80
Self-citation rate: 28.60%
Articles per year: 89
Review time: 6-12 weeks
About the journal: International Angiology publishes scientific papers on angiology. Manuscripts may be submitted in the form of editorials, original articles, review articles, special articles, letters to the Editor and guidelines. The journal aims to provide its readers with papers of the highest quality and impact through a process of careful peer review and editorial work. Duties and responsibilities of all the subjects involved in the editorial process are summarized at Publication ethics. Manuscripts are expected to comply with the instructions to authors which conform to the Uniform Requirements for Manuscripts Submitted to Biomedical Editors by the International Committee of Medical Journal Editors (ICMJE).