A general approach to improve adversarial robustness of DNNs for medical image segmentation and detection.

Linhai Ma, Jiasong Chen, Linchen Qian, Liang Liang
{"title":"A general approach to improve adversarial robustness of DNNs for medical image segmentation and detection.","authors":"Linhai Ma, Jiasong Chen, Linchen Qian, Liang Liang","doi":"10.1117/12.3006534","DOIUrl":null,"url":null,"abstract":"<p><p>It is known that deep neural networks (DNNs) are vulnerable to adversarial noises. Improving adversarial robustness of DNNs is essential. This is not only because unperceivable adversarial noise is a threat to the performance of DNNs models, but also adversarially robust DNNs have a strong resistance to the white noises that may present everywhere in the actual world. To improve adversarial robustness of DNNs, a variety of adversarial training methods have been proposed. Most of the previous methods are designed under one single application scenario: image classification. However, image segmentation, landmark detection, and object detection are more commonly observed than classifying the entire images in the medical imaging field. Although classification tasks and other tasks (e.g., regression) share some similarities, they also differ in certain ways, e.g., some adversarial training methods use misclassification criteria, which is well-defined in classification but not in regression. These restrictions/limitations hinder application of adversarial training for many medical imaging analysis tasks. In our work, the contributions are as follows: (1) We investigated the existing adversarial training methods and discovered the challenges that make those methods unsuitable for adaptation in segmentation and detection tasks. (2) We modified and adapted some existing adversarial training methods for medical image segmentation and detection tasks. (3) We proposed a general adversarial training method for medical image segmentation and detection. 
(4) We implemented our method in diverse medical imaging tasks using publicly available datasets, including MRI segmentation, Cephalometric landmark detection, and blood cell detection. The experiments substantiated the effectiveness of our method.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11491114/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of SPIE--the International Society for Optical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3006534","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/4/2 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

It is well known that deep neural networks (DNNs) are vulnerable to adversarial noise, so improving their adversarial robustness is essential. This is not only because imperceptible adversarial noise threatens model performance, but also because adversarially robust DNNs strongly resist the white noise that is ubiquitous in the real world. A variety of adversarial training methods have been proposed to improve robustness, but most were designed for a single application scenario: image classification. In medical imaging, however, image segmentation, landmark detection, and object detection are more common than whole-image classification. Although classification and other tasks (e.g., regression) share some similarities, they differ in important ways; for example, some adversarial training methods rely on a misclassification criterion, which is well defined for classification but not for regression. These limitations hinder the application of adversarial training to many medical image analysis tasks. Our contributions are as follows: (1) We investigated existing adversarial training methods and identified the challenges that make them unsuitable for segmentation and detection tasks. (2) We modified and adapted several existing adversarial training methods for medical image segmentation and detection. (3) We proposed a general adversarial training method for medical image segmentation and detection. (4) We evaluated our method on diverse medical imaging tasks using publicly available datasets, including MRI segmentation, cephalometric landmark detection, and blood cell detection. The experiments substantiated the effectiveness of our method.
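To make the abstract's point concrete: adversarial training for segmentation cannot use a misclassification criterion (there is no single class label to flip), but it can generate noise by ascending the task loss itself, e.g., a per-pixel binary cross-entropy. The sketch below is purely illustrative and is not the paper's proposed method: it uses a toy pixel-wise logistic "segmenter" with a hand-derived gradient so that a PGD-style (projected gradient ascent) noise generator can run without any deep learning framework. All names (`pgd_noise`, `epsilon`, `alpha`, the toy model `p = sigmoid(w*x + b)`) are assumptions made for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    # per-pixel binary cross-entropy, averaged over the image
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def pgd_noise(x, y, w, b, epsilon=0.1, alpha=0.02, steps=10):
    """PGD-style noise for a toy pixel-wise logistic segmenter.

    The noise maximizes the segmentation loss (BCE) directly, instead of
    a misclassification criterion, which is the kind of adaptation the
    abstract argues is needed for segmentation/detection tasks.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = sigmoid(w * (x + delta) + b)
        # analytic gradient of the mean BCE w.r.t. the input:
        # dL/dx_i = (p_i - y_i) * w / N
        grad = (p - y) * w / x.size
        # gradient ascent on the loss, projected onto the L-inf epsilon ball
        delta = np.clip(delta + alpha * np.sign(grad), -epsilon, epsilon)
    return delta

# toy 8x8 "image" and its ground-truth binary mask
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
y = (x > 0).astype(float)
w, b = 2.0, 0.0  # fixed toy model parameters

delta = pgd_noise(x, y, w, b)
clean_loss = bce_loss(sigmoid(w * x + b), y)
adv_loss = bce_loss(sigmoid(w * (x + delta) + b), y)
```

In an actual adversarial training loop, `x + delta` would replace `x` in the parameter update, so the model learns on the perturbed inputs; the key design choice shown here is that the attack objective is the task loss itself, which generalizes to segmentation and regression-style detection where "misclassification" is undefined.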
