Junmei Sun, Xin Zhang, Xiumei Li, Lei Xiao, Huang Bai, Meixi Wang, Maoqun Yao
{"title":"mafl攻击:一种针对基于深度学习的医学图像分割模型的针对性攻击方法。","authors":"Junmei Sun, Xin Zhang, Xiumei Li, Lei Xiao, Huang Bai, Meixi Wang, Maoqun Yao","doi":"10.1117/1.JMI.12.4.044501","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Medical image segmentation based on deep learning has played a crucial role in computer-aided medical diagnosis. However, they are still vulnerable to imperceptible adversarial attacks, which lead to potential misdiagnosis in clinical practice. Research on adversarial attack methods is beneficial for improving the robustness design of medical image segmentation models. Currently, there is a lack of research on adversarial attack methods toward deep learning-based medical image segmentation models. Existing attack methods often yield poor results in terms of both attack effects and image quality of adversarial examples and primarily focus on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose an adversarial attack approach.</p><p><strong>Approach: </strong>We propose an approach called momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). The proposed feature-cosine-similarity loss uses high-level abstract semantic information to interfere with the understanding of models about adversarial examples. The low-frequency component constraint ensures the imperceptibility of adversarial examples by constraining the low-frequency components. In addition, the momentum and dynamic step-size calculator are used to enhance the attack process.</p><p><strong>Results: </strong>Experimental results demonstrate that MAFL-Attack generates adversarial examples with superior targeted attack effects compared with the existing Adaptive Segmentation Mask Attack method, in terms of the evaluation metrics of Intersection over Union, accuracy, <math> <mrow> <msub><mrow><mi>L</mi></mrow> <mrow><mn>2</mn></mrow> </msub> </mrow> </math> , <math> <mrow> <msub><mrow><mi>L</mi></mrow> <mrow><mo>∞</mo></mrow> </msub> </mrow> </math> , Peak Signal to Noise Ratio, and Structure Similarity Index Measure.</p><p><strong>Conclusions: </strong>The design idea of the MAFL-Attack inspires researchers to take corresponding defensive measures to strengthen the robustness of segmentation models.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 4","pages":"044501"},"PeriodicalIF":1.7000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12266980/pdf/","citationCount":"0","resultStr":"{\"title\":\"MAFL-Attack: a targeted attack method against deep learning-based medical image segmentation models.\",\"authors\":\"Junmei Sun, Xin Zhang, Xiumei Li, Lei Xiao, Huang Bai, Meixi Wang, Maoqun Yao\",\"doi\":\"10.1117/1.JMI.12.4.044501\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Medical image segmentation based on deep learning has played a crucial role in computer-aided medical diagnosis. However, they are still vulnerable to imperceptible adversarial attacks, which lead to potential misdiagnosis in clinical practice. Research on adversarial attack methods is beneficial for improving the robustness design of medical image segmentation models. Currently, there is a lack of research on adversarial attack methods toward deep learning-based medical image segmentation models. 
Existing attack methods often yield poor results in terms of both attack effects and image quality of adversarial examples and primarily focus on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose an adversarial attack approach.</p><p><strong>Approach: </strong>We propose an approach called momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). The proposed feature-cosine-similarity loss uses high-level abstract semantic information to interfere with the understanding of models about adversarial examples. The low-frequency component constraint ensures the imperceptibility of adversarial examples by constraining the low-frequency components. In addition, the momentum and dynamic step-size calculator are used to enhance the attack process.</p><p><strong>Results: </strong>Experimental results demonstrate that MAFL-Attack generates adversarial examples with superior targeted attack effects compared with the existing Adaptive Segmentation Mask Attack method, in terms of the evaluation metrics of Intersection over Union, accuracy, <math> <mrow> <msub><mrow><mi>L</mi></mrow> <mrow><mn>2</mn></mrow> </msub> </mrow> </math> , <math> <mrow> <msub><mrow><mi>L</mi></mrow> <mrow><mo>∞</mo></mrow> </msub> </mrow> </math> , Peak Signal to Noise Ratio, and Structure Similarity Index Measure.</p><p><strong>Conclusions: </strong>The design idea of the MAFL-Attack inspires researchers to take corresponding defensive measures to strengthen the robustness of segmentation models.</p>\",\"PeriodicalId\":47707,\"journal\":{\"name\":\"Journal of Medical Imaging\",\"volume\":\"12 4\",\"pages\":\"044501\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12266980/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1117/1.JMI.12.4.044501\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/7/16 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1117/1.JMI.12.4.044501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/7/16 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
MAFL-Attack: a targeted attack method against deep learning-based medical image segmentation models.
Purpose: Deep learning-based medical image segmentation plays a crucial role in computer-aided medical diagnosis. However, segmentation models remain vulnerable to imperceptible adversarial attacks, which can lead to misdiagnosis in clinical practice. Research on adversarial attack methods therefore helps improve the robustness of medical image segmentation models. Adversarial attacks on deep learning-based segmentation models are still under-studied: existing methods often produce weak attack effects and low-quality adversarial examples, and they focus primarily on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose a targeted adversarial attack approach.
Approach: We propose the momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). Its feature-cosine-similarity loss uses high-level abstract semantic information to interfere with the model's understanding of adversarial examples, and its low-frequency component constraint keeps adversarial examples imperceptible by restricting the low-frequency components of the perturbation. In addition, momentum and a dynamic step-size calculator strengthen the attack process.
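The abstract does not include an implementation, but the components it names map naturally onto a standard iterative attack loop. The following PyTorch sketch is purely illustrative and is not the authors' code: the feature extractor, the FFT-based low-pass filter, the decaying step-size schedule standing in for the dynamic step-size calculator, and all hyperparameter values are assumptions.

```python
import torch
import torch.nn.functional as F


def low_pass(delta, cutoff):
    """Keep only low-frequency components of a perturbation via an FFT mask
    (one plausible realization of a low-frequency constraint; assumed here)."""
    freq = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    h, w = delta.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).to(delta)
    return torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real


def targeted_attack_sketch(model, feat_extractor, x, target_mask,
                           steps=40, eps=8 / 255, mu=0.9, cutoff=0.25):
    """Hypothetical momentum-driven targeted attack with a feature-cosine-
    similarity loss and a low-frequency constraint. Illustrative only."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # momentum accumulator
    clean_feat = feat_extractor(x).detach()  # high-level features of clean image
    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                # (B, K, H, W) class scores
        # Targeted term: pull the predicted segmentation toward target_mask.
        seg_loss = F.cross_entropy(logits, target_mask)
        # Feature term: push adversarial features away from clean features
        # by minimizing their cosine similarity.
        cos_loss = F.cosine_similarity(feat_extractor(x_adv).flatten(1),
                                       clean_feat.flatten(1)).mean()
        grad = torch.autograd.grad(seg_loss + cos_loss, x_adv)[0]
        # Momentum update with L1-normalized gradients (MI-FGSM style).
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        # Decaying step size as a stand-in for a dynamic step-size calculator.
        alpha = eps / (t + 1) ** 0.5
        delta = (x_adv - x).detach() - alpha * g.sign()   # descend the loss
        delta = low_pass(delta, cutoff).clamp(-eps, eps)  # bound perturbation
        x_adv = (x + delta).clamp(0, 1).detach()
    return x_adv
```

The descent direction reflects the targeted setting: both the cross-entropy toward the target mask and the feature cosine similarity are minimized, so the segmentation is steered toward the attacker's mask while the internal representation drifts away from the clean image's.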
Results: Experimental results demonstrate that MAFL-Attack generates adversarial examples with stronger targeted attack effects than the existing Adaptive Segmentation Mask Attack method, as measured by Intersection over Union, accuracy, the L₂ and L∞ perturbation norms, Peak Signal-to-Noise Ratio, and the Structural Similarity Index Measure.
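For reference, the evaluation metrics listed above are standard; a minimal sketch (assuming NumPy arrays, binary masks, and images scaled to [0, 1]) of how IoU, the perturbation norms, and PSNR might be computed:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def perturbation_norms(x, x_adv):
    """L2 and L-infinity norms of the adversarial perturbation."""
    d = (x_adv - x).ravel()
    return np.linalg.norm(d), np.abs(d).max()

def psnr(x, x_adv, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((x_adv - x) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```

SSIM is more involved and is typically taken from a library routine such as skimage.metrics.structural_similarity rather than reimplemented.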
Conclusions: The design of MAFL-Attack can guide researchers toward corresponding defensive measures that strengthen the robustness of segmentation models.
Journal introduction:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, a field that continues to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease, as well as in the understanding of normal conditions. The scope of JMI includes: imaging physics; tomographic reconstruction algorithms (such as those in CT and MRI); image processing and deep learning; computer-aided diagnosis and quantitative image analysis; visualization and modeling; picture archiving and communication systems (PACS); image perception and observer performance; technology assessment; ultrasonic imaging; image-guided procedures; digital pathology; and biomedical applications of biomedical imaging. JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.