Parmita Mondal, Mohammad Mahdi Shiraz Bhurwani, Swetadri Vasan Setlur Nagesh, Pui Man Rosalind Lai, Jason M Davies, Elad I Levy, Kunal Vakharia, Michael Levitt, Adnan H Siddiqui, Ciprian N Ionita
{"title":"在血流分流治疗的动脉瘤中,最小化定量血管造影中人为引起的变异性,以实现稳健且可解释的基于人工智能的闭塞预测。","authors":"Parmita Mondal, Mohammad Mahdi Shiraz Bhurwani, Swetadri Vasan Setlur Nagesh, Pui Man Rosalind Lai, Jason M Davies, Elad I Levy, Kunal Vakharia, Michael Levitt, Adnan H Siddiqui, Ciprian N Ionita","doi":"10.1136/jnis-2025-023416","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Bias from contrast injection variability is a significant obstacle to accurate intracranial aneurysm (IA) occlusion prediction using quantitative angiography (QA) and deep neural networks (DNNs). This study explores bias removal and explainable AI (XAI) for outcome prediction.</p><p><strong>Objective: </strong>To implement an injection bias removal algorithm for reducing QA variability and examine the impact of XAI on the reliability and interpretability of deep learning models for occlusion prediction in flow diverter-treated aneurysms.</p><p><strong>Methods: </strong>This study used angiograms from 458 patients with flow diverter-treated IAs, with 6-month follow-up defining occlusion status. We minimized injection variability by deconvolving the parent artery input to isolate the aneurysm's impulse response, then reconvolving it with a standardized injection curve. A DNN trained on these QA-derived biomarkers predicted 6-month occlusion. Local Interpretable Model-Agnostic Explanations (LIME) identified the key imaging features influencing the model, ensuring transparency and clinical relevance.</p><p><strong>Results: </strong>The DNN trained with uncorrected QA parameters achieved a mean area under the receiver operating characteristic curve (AUROC) of 0.60±0.05 and an accuracy of 0.58±0.03. After correcting for injection bias by deconvolving the parent artery input and reconvolving it with a standardized injection curve, the DNN's AUCROC increased to 0.79±0.02 and accuracy to 0.73±0.01. Sensitivity and specificity were 67.61±1.93% and 76.19±1.12%, respectively. LIME plots were added for each prediction to enhance interpretability.</p><p><strong>Conclusions: </strong>Standardizing QA parameters via injection bias correction improves occlusion prediction accuracy for flow diverter-treated IAs. Adding explainable AI (eg, LIME) clarifies model decisions, demonstrating the feasibility of clinically interpretable AI-based outcome prediction.</p>","PeriodicalId":16411,"journal":{"name":"Journal of NeuroInterventional Surgery","volume":" ","pages":""},"PeriodicalIF":4.3000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Minimizing human-induced variability in quantitative angiography for a robust and explainable AI-based occlusion prediction in flow diverter-treated aneurysms.\",\"authors\":\"Parmita Mondal, Mohammad Mahdi Shiraz Bhurwani, Swetadri Vasan Setlur Nagesh, Pui Man Rosalind Lai, Jason M Davies, Elad I Levy, Kunal Vakharia, Michael Levitt, Adnan H Siddiqui, Ciprian N Ionita\",\"doi\":\"10.1136/jnis-2025-023416\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Bias from contrast injection variability is a significant obstacle to accurate intracranial aneurysm (IA) occlusion prediction using quantitative angiography (QA) and deep neural networks (DNNs). 
This study explores bias removal and explainable AI (XAI) for outcome prediction.</p><p><strong>Objective: </strong>To implement an injection bias removal algorithm for reducing QA variability and examine the impact of XAI on the reliability and interpretability of deep learning models for occlusion prediction in flow diverter-treated aneurysms.</p><p><strong>Methods: </strong>This study used angiograms from 458 patients with flow diverter-treated IAs, with 6-month follow-up defining occlusion status. We minimized injection variability by deconvolving the parent artery input to isolate the aneurysm's impulse response, then reconvolving it with a standardized injection curve. A DNN trained on these QA-derived biomarkers predicted 6-month occlusion. Local Interpretable Model-Agnostic Explanations (LIME) identified the key imaging features influencing the model, ensuring transparency and clinical relevance.</p><p><strong>Results: </strong>The DNN trained with uncorrected QA parameters achieved a mean area under the receiver operating characteristic curve (AUROC) of 0.60±0.05 and an accuracy of 0.58±0.03. After correcting for injection bias by deconvolving the parent artery input and reconvolving it with a standardized injection curve, the DNN's AUCROC increased to 0.79±0.02 and accuracy to 0.73±0.01. Sensitivity and specificity were 67.61±1.93% and 76.19±1.12%, respectively. LIME plots were added for each prediction to enhance interpretability.</p><p><strong>Conclusions: </strong>Standardizing QA parameters via injection bias correction improves occlusion prediction accuracy for flow diverter-treated IAs. Adding explainable AI (eg, LIME) clarifies model decisions, demonstrating the feasibility of clinically interpretable AI-based outcome prediction.</p>\",\"PeriodicalId\":16411,\"journal\":{\"name\":\"Journal of NeuroInterventional Surgery\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of NeuroInterventional Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1136/jnis-2025-023416\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NEUROIMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of NeuroInterventional Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1136/jnis-2025-023416","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROIMAGING","Score":null,"Total":0}
Minimizing human-induced variability in quantitative angiography for a robust and explainable AI-based occlusion prediction in flow diverter-treated aneurysms.
Background: Bias from contrast injection variability is a significant obstacle to accurate intracranial aneurysm (IA) occlusion prediction using quantitative angiography (QA) and deep neural networks (DNNs). This study explores bias removal and explainable AI (XAI) for outcome prediction.
Objective: To implement an injection bias removal algorithm that reduces QA variability, and to examine the impact of XAI on the reliability and interpretability of deep learning models for occlusion prediction in flow diverter-treated aneurysms.
Methods: This study used angiograms from 458 patients with flow diverter-treated IAs, with 6-month follow-up defining occlusion status. We minimized injection variability by deconvolving the parent artery input to isolate the aneurysm's impulse response, then reconvolving it with a standardized injection curve. A DNN trained on these QA-derived biomarkers predicted 6-month occlusion. Local Interpretable Model-Agnostic Explanations (LIME) identified the key imaging features influencing the model, ensuring transparency and clinical relevance.
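The correction step described above can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' implementation: it models the aneurysm time-density curve (TDC) as the parent-artery input convolved with an aneurysm impulse response, recovers that response by regularized (Wiener-style) Fourier-domain deconvolution, and reconvolves it with a standardized injection curve. The function name `correct_injection_bias`, the stabilizer `eps`, and the synthetic gamma-variate-like curves are all illustrative assumptions.

```python
import numpy as np

def correct_injection_bias(aneurysm_tdc, parent_tdc, standard_curve, eps=1e-3):
    """Deconvolve the parent-artery input from the aneurysm TDC to
    estimate the aneurysm's impulse response, then reconvolve it with a
    standardized injection curve.

    Regularized Fourier-domain deconvolution; `eps` stabilizes the
    division where the parent spectrum is near zero. Sampling-interval
    scaling is omitted for brevity.
    """
    n = len(aneurysm_tdc)
    A = np.fft.rfft(aneurysm_tdc, n)
    P = np.fft.rfft(parent_tdc, n)
    H = A * np.conj(P) / (np.abs(P) ** 2 + eps)  # regularized A / P
    impulse_response = np.fft.irfft(H, n)
    # Reconvolution puts every case on the same injection profile;
    # crop back to the original time grid.
    return np.convolve(impulse_response, standard_curve, mode="full")[:n]

# Purely illustrative synthetic curves.
t = np.arange(0, 10, 0.1)
parent = t**2 * np.exp(-t)                  # measured parent-artery TDC
true_h = np.exp(-0.8 * t)                   # ground-truth impulse response
aneurysm = np.convolve(parent, true_h, mode="full")[: len(t)]
standard = t * np.exp(-t)                   # standardized injection curve
corrected_tdc = correct_injection_bias(aneurysm, parent, standard)
```

QA biomarkers computed from `corrected_tdc` would then reflect aneurysm hemodynamics rather than operator-dependent injection technique, which is the point of the standardization.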
Results: The DNN trained with uncorrected QA parameters achieved a mean area under the receiver operating characteristic curve (AUROC) of 0.60±0.05 and an accuracy of 0.58±0.03. After correcting for injection bias by deconvolving the parent artery input and reconvolving it with a standardized injection curve, the DNN's AUROC increased to 0.79±0.02 and accuracy to 0.73±0.01. Sensitivity and specificity were 67.61±1.93% and 76.19±1.12%, respectively. LIME plots were added for each prediction to enhance interpretability.
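Per-prediction LIME plots of the kind described can be produced with the open-source `lime` package. The sketch below is a hedged stand-in, not the study's pipeline: the QA feature names, the synthetic data, and the logistic-regression surrogate (in place of the study's DNN, since LIME only needs a `predict_proba`-style function) are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical QA-derived biomarker names; the study's exact feature
# set is not specified here.
feature_names = ["time_to_peak", "mean_transit_time", "peak_height", "tdc_auc"]
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)

# Stand-in classifier; any model exposing predict_proba works with LIME.
clf = LogisticRegression().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["not occluded", "occluded"],
    mode="classification",
)

# Explain one 6-month occlusion prediction: the weights indicate which
# QA features pushed this case toward or away from predicted occlusion.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # [(feature condition, local weight), ...]
```

`exp.as_pyplot_figure()` renders the same attributions as the bar-style plots the abstract refers to.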
Conclusions: Standardizing QA parameters via injection bias correction improves occlusion prediction accuracy for flow diverter-treated IAs. Adding explainable AI (eg, LIME) clarifies model decisions, demonstrating the feasibility of clinically interpretable AI-based outcome prediction.
About the journal:
The Journal of NeuroInterventional Surgery (JNIS) is a leading peer-reviewed journal for scientific research and literature pertaining to the field of neurointerventional surgery. The journal's launch follows growing professional interest in neurointerventional techniques for the treatment of a range of neurological and vascular problems, including stroke, aneurysms, brain tumors, and spinal compression. The journal is owned by SNIS and is also the official journal of the Interventional Chapter of the Australian and New Zealand Society of Neuroradiology (ANZSNR), the Canadian Interventional Neuro Group, the Hong Kong Neurological Society (HKNS) and the Neuroradiological Society of Taiwan.