Elevating adversarial robustness by contrastive multitasking defence in medical image segmentation

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Sneha Shukla, Puneet Gupta
{"title":"Elevating adversarial robustness by contrastive multitasking defence in medical image segmentation","authors":"Sneha Shukla,&nbsp;Puneet Gupta","doi":"10.1016/j.neunet.2025.108182","DOIUrl":null,"url":null,"abstract":"<div><div>Although Deep Learning (DL)-based Medical Image Segmentation (MIS) models are critically important, adversarial attacks substantially diminish their efficacy. Such attacks subtly perturb inputs, causing the model to produce inaccurate predictions. This problem is more prevalent in medical images, as their intricate textures can mislead the model to focus on irrelevant regions, undermining performance and robustness. Thus, defending against adversarial attacks is crucial for a robust DL-based MIS model. While existing defences have proven effective in non-medical domains, their impact in medical domains remains limited. To bridge this gap, we propose a novel defence, <strong>CEASE</strong> (<strong>C</strong>ontrastiv<strong>E</strong> Multit<strong>AS</strong>king D<strong>E</strong>fence), to significantly enhance the adversarial resilience of MIS models, delivering notable performance gain. <em>CEASE</em> exhibits contrastive learning, multitask learning, and their consolidation-based defence. Initially, we investigate the importance of contrastive learning in a DL-based MIS model. It leverages the observation that learning similar features for clean, adversarial, and augmented samples during training significantly enhances adversarial robustness. Subsequently, our proposed multitask learning-based defence provides generic feature representation and selects auxiliary tasks based on their weak relevance to the main task, improving model robustness. Eventually, we leverage the advantages of contrastive and multitask learning to propose their fusion-based defence. It employs contrastive learning specifically for MIS tasks and follows the proposed multitask model architecture. 
Experiments on publicly available datasets across several state-of-the-art MIS models reveal that <em>CEASE</em> surpasses the well-known defences by mitigating the efficacy of adversarial attacks up to 0% attack success rate on maximum average distortion with modest performance advancement.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108182"},"PeriodicalIF":6.3000,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025010627","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Although Deep Learning (DL)-based Medical Image Segmentation (MIS) models are critically important, adversarial attacks substantially diminish their efficacy. Such attacks subtly perturb inputs, causing the model to produce inaccurate predictions. The problem is more pronounced in medical images, whose intricate textures can mislead a model into focusing on irrelevant regions, undermining both performance and robustness. Defending against adversarial attacks is therefore crucial for a robust DL-based MIS model. While existing defences have proven effective in non-medical domains, their impact in medical domains remains limited. To bridge this gap, we propose a novel defence, CEASE (ContrastivE MultitASking DEfence), which significantly enhances the adversarial resilience of MIS models while delivering a notable performance gain. CEASE comprises a contrastive learning defence, a multitask learning defence, and their consolidation. First, we investigate the role of contrastive learning in a DL-based MIS model, exploiting the observation that learning similar features for clean, adversarial, and augmented samples during training significantly enhances adversarial robustness. Next, our multitask learning-based defence provides a generic feature representation and selects auxiliary tasks by their weak relevance to the main task, further improving robustness. Finally, we leverage the advantages of both to propose their fusion-based defence, which applies contrastive learning specifically to MIS tasks within the proposed multitask model architecture. Experiments on publicly available datasets across several state-of-the-art MIS models show that CEASE surpasses well-known defences, reducing the attack success rate to as low as 0% at the maximum average distortion, with a modest performance improvement.
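The contrastive idea described in the abstract — learning similar features for clean, adversarial, and augmented versions of each sample — can be sketched as an InfoNCE-style loss that treats the three views of a sample as positives and all other samples' views as negatives. This is an illustrative reconstruction, not the paper's exact formulation: the function name, temperature value, and normalisation choices are assumptions.

```python
import numpy as np

def contrastive_defence_loss(z_clean, z_adv, z_aug, temperature=0.5):
    """InfoNCE-style loss over three views per sample.

    Pulls together the embeddings of the clean, adversarial, and
    augmented views of the same sample (positives) and pushes apart
    embeddings of different samples (negatives).
    """
    n = z_clean.shape[0]
    # Stack views: shape (3n, d); the views of sample i sit at rows
    # i, i + n, and i + 2n.
    z = np.concatenate([z_clean, z_adv, z_aug], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise rows

    sim = (z @ z.T) / temperature          # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity

    # Row-wise log-softmax over all other embeddings.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Positive mask: same underlying sample, different view.
    ids = np.tile(np.arange(n), 3)
    pos = ids[:, None] == ids[None, :]
    np.fill_diagonal(pos, False)

    # Negative mean log-likelihood assigned to the positives.
    return -log_prob[pos].mean()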
Source journal

Neural Networks (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Aims and scope: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.