Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.

Anh T Tran, Gaby Abou Karam, Dorin Zeevi, Adnan I Qureshi, Ajay Malhotra, Shahram Majidi, Santosh B Murthy, Soojin Park, Despina Kontos, Guido J Falcone, Kevin N Sheth, Seyedmehdi Payabvash
{"title":"提高深度学习模型在预测入院头颅CT血肿扩张中的鲁棒性。","authors":"Anh T Tran, Gaby Abou Karam, Dorin Zeevi, Adnan I Qureshi, Ajay Malhotra, Shahram Majidi, Santosh B Murthy, Soojin Park, Despina Kontos, Guido J Falcone, Kevin N Sheth, Seyedmehdi Payabvash","doi":"10.3174/ajnr.A8650","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and purpose: </strong>Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness. In this study, we examined adversarial training and input modifications to improve the robustness of deep-learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH).</p><p><strong>Materials and methods: </strong>We used a multicenter cohort of n=890 patients for cross-validation/training, and a cohort of n=684 consecutive ICH patients from two stroke centers for independent validation. Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks were applied for training and testing. We developed and tested four different models to predict ≥3mL, ≥6mL, ≥9mL, and ≥12mL HE in independent validation cohort applying Receiver Operating Characteristics (ROC) Area Under the Curve (AUC). We examined varying mixtures of adversarial and non-perturbed (clean) scans for training as well as including additional input from the hyperparameter-free Otsu multi-threshold segmentation for model.</p><p><strong>Results: </strong>When deep-learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC dropped from 0.8 to 0.67 and 0.71, respectively. Overall, the best performing strategy to improve model robustness was training with 5-to-3 mix of clean and PGD adversarial scans and addition of Otsu multi-threshold segmentation to model input, increasing the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against similar type attack but offered limited cross-attack robustness against PGD-type images.</p><p><strong>Conclusions: </strong>Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep-learning model robustness in prediction of HE from admission head CTs in acute ICH.</p><p><strong>Abbreviations: </strong>ATACH-2= Antihypertensive Treatment of Acute Cerebral Hemorrhage; AUC= Area Under the Curve; Dice=Dice coefficient; CNN= Convolutional Neural Network; FGSM= Fast Gradient Sign Method; ICH= Intracerebral hemorrhage; HD= Hausdorff distance; HE= Hematoma expansion; PGD= Projected Gradient Descent; ROC= Receiver Operating Characteristics; VS= Volume similarity.</p>","PeriodicalId":93863,"journal":{"name":"AJNR. 
American journal of neuroradiology","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.\",\"authors\":\"Anh T Tran, Gaby Abou Karam, Dorin Zeevi, Adnan I Qureshi, Ajay Malhotra, Shahram Majidi, Santosh B Murthy, Soojin Park, Despina Kontos, Guido J Falcone, Kevin N Sheth, Seyedmehdi Payabvash\",\"doi\":\"10.3174/ajnr.A8650\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background and purpose: </strong>Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness. In this study, we examined adversarial training and input modifications to improve the robustness of deep-learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH).</p><p><strong>Materials and methods: </strong>We used a multicenter cohort of n=890 patients for cross-validation/training, and a cohort of n=684 consecutive ICH patients from two stroke centers for independent validation. Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks were applied for training and testing. We developed and tested four different models to predict ≥3mL, ≥6mL, ≥9mL, and ≥12mL HE in independent validation cohort applying Receiver Operating Characteristics (ROC) Area Under the Curve (AUC). We examined varying mixtures of adversarial and non-perturbed (clean) scans for training as well as including additional input from the hyperparameter-free Otsu multi-threshold segmentation for model.</p><p><strong>Results: </strong>When deep-learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC dropped from 0.8 to 0.67 and 0.71, respectively. Overall, the best performing strategy to improve model robustness was training with 5-to-3 mix of clean and PGD adversarial scans and addition of Otsu multi-threshold segmentation to model input, increasing the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against similar type attack but offered limited cross-attack robustness against PGD-type images.</p><p><strong>Conclusions: </strong>Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep-learning model robustness in prediction of HE from admission head CTs in acute ICH.</p><p><strong>Abbreviations: </strong>ATACH-2= Antihypertensive Treatment of Acute Cerebral Hemorrhage; AUC= Area Under the Curve; Dice=Dice coefficient; CNN= Convolutional Neural Network; FGSM= Fast Gradient Sign Method; ICH= Intracerebral hemorrhage; HD= Hausdorff distance; HE= Hematoma expansion; PGD= Projected Gradient Descent; ROC= Receiver Operating Characteristics; VS= Volume similarity.</p>\",\"PeriodicalId\":93863,\"journal\":{\"name\":\"AJNR. 
American journal of neuroradiology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AJNR. American journal of neuroradiology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3174/ajnr.A8650\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJNR. American journal of neuroradiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3174/ajnr.A8650","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background and purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness. In this study, we examined adversarial training and input modifications to improve the robustness of deep-learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH).
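
The voxel-level manipulations described above can be illustrated with the Fast Gradient Sign Method (FGSM), which takes a single step of size epsilon along the sign of the loss gradient. The sketch below is illustrative only and is not the authors' code; `model`, `ct_volume`, `label`, and the `epsilon` value are hypothetical placeholders.

```python
# Minimal FGSM sketch (illustrative only, not the study's implementation).
import torch
import torch.nn.functional as F

def fgsm_attack(model, ct_volume, label, epsilon=0.01):
    """Perturb each voxel by +/- epsilon in the direction that increases the loss."""
    ct_volume = ct_volume.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model(ct_volume), label)
    loss.backward()
    adv = ct_volume + epsilon * ct_volume.grad.sign()
    # Keep intensities within the (assumed normalized) valid range of the scan.
    return adv.detach().clamp(0.0, 1.0)
```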

Materials and methods: We used a multicenter cohort of n=890 patients for cross-validation/training and a cohort of n=684 consecutive ICH patients from two stroke centers for independent validation. Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks were applied for training and testing. We developed and tested four models to predict ≥3 mL, ≥6 mL, ≥9 mL, and ≥12 mL HE, evaluated in the independent validation cohort using Receiver Operating Characteristics (ROC) Area Under the Curve (AUC). We examined varying mixtures of adversarial and non-perturbed (clean) scans for training, as well as adding the output of hyperparameter-free Otsu multi-threshold segmentation as an additional model input.
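
For orientation, PGD iterates small FGSM-style steps and projects the result back into an epsilon-ball around the original scan, which generally makes it a stronger attack than single-step FGSM. The sketch below is a minimal illustration under that standard formulation; the step size `alpha` and number of `steps` are placeholder assumptions, not parameters reported in the study.

```python
# Minimal PGD sketch (illustrative only; hyperparameters are placeholders).
import torch
import torch.nn.functional as F

def pgd_attack(model, ct_volume, label, epsilon=0.01, alpha=0.002, steps=10):
    """Iterate small gradient-sign steps, projecting back into an epsilon-ball."""
    original = ct_volume.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(model(adv), label)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            # Project back onto the epsilon-ball around the original scan.
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```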

Results: When deep-learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC dropped from 0.80 to 0.67 and 0.71, respectively. Overall, the best-performing strategy to improve model robustness was training with a 5-to-3 mix of clean and PGD adversarial scans and adding Otsu multi-threshold segmentation to the model input, which increased the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against attacks of the same type but offered limited cross-attack robustness against PGD-type images.
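
As a rough sketch of how such a training mix and the Otsu input channel might be assembled (assuming 2D slices, a CPU PyTorch pipeline, a downstream model that accepts a two-channel input, and the `pgd_attack` helper sketched above), the code below is one illustrative reading of the 5-to-3 ratio, not the authors' pipeline.

```python
# Illustrative sketch: mixed clean/PGD batch plus an Otsu segmentation channel.
import numpy as np
import torch
from skimage.filters import threshold_multiotsu  # hyperparameter-free thresholding

def otsu_channel(ct_slice):
    """Multi-Otsu class map of a 2D slice, rescaled to [0, 1], as an extra channel."""
    thresholds = threshold_multiotsu(ct_slice, classes=3)
    classes = np.digitize(ct_slice, bins=thresholds)  # integer labels 0..2
    return classes.astype(np.float32) / len(thresholds)

def build_mixed_batch(model, scans, labels, clean_to_adv=(5, 3)):
    """Replace part of the batch with PGD adversarial scans (roughly 5:3 clean/adv)
    and append the Otsu map of every scan as a second input channel."""
    n_adv = scans.shape[0] * clean_to_adv[1] // sum(clean_to_adv)
    adv = pgd_attack(model, scans[:n_adv], labels[:n_adv])  # sketched earlier
    mixed = torch.cat([adv, scans[n_adv:]], dim=0)
    seg = torch.stack([torch.from_numpy(otsu_channel(s.squeeze(0).numpy()))
                       for s in mixed]).unsqueeze(1)
    return torch.cat([mixed, seg], dim=1), labels
```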

Conclusions: Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep-learning model robustness in prediction of HE from admission head CTs in acute ICH.

Abbreviations: ATACH-2 = Antihypertensive Treatment of Acute Cerebral Hemorrhage; AUC = Area Under the Curve; Dice = Dice coefficient; CNN = Convolutional Neural Network; FGSM = Fast Gradient Sign Method; ICH = Intracerebral hemorrhage; HD = Hausdorff distance; HE = Hematoma expansion; PGD = Projected Gradient Descent; ROC = Receiver Operating Characteristics; VS = Volume similarity.
