Elevating adversarial robustness by contrastive multitasking defence in medical image segmentation
Sneha Shukla, Puneet Gupta
DOI: 10.1016/j.neunet.2025.108182
Neural Networks, Volume 194, Article 108182. Published 2025-10-03.
URL: https://www.sciencedirect.com/science/article/pii/S0893608025010627
Citations: 0
Abstract
Although Deep Learning (DL)-based Medical Image Segmentation (MIS) models are critically important, adversarial attacks substantially diminish their efficacy. Such attacks subtly perturb inputs, causing the model to produce inaccurate predictions. This problem is especially acute for medical images, whose intricate textures can mislead a model into focusing on irrelevant regions, undermining both performance and robustness. Defending against adversarial attacks is therefore crucial for a robust DL-based MIS model. While existing defences have proven effective in non-medical domains, their impact in medical domains remains limited. To bridge this gap, we propose a novel defence, CEASE (ContrastivE MultitASking DEfence), which significantly enhances the adversarial resilience of MIS models while delivering a notable performance gain. CEASE combines contrastive learning, multitask learning, and a defence based on their consolidation. First, we investigate the role of contrastive learning in a DL-based MIS model, leveraging the observation that learning similar features for clean, adversarial, and augmented samples during training significantly enhances adversarial robustness. Next, our multitask learning-based defence provides a generic feature representation and selects auxiliary tasks based on their weak relevance to the main task, further improving robustness. Finally, we fuse the advantages of both components into a combined defence, which applies contrastive learning specifically to MIS tasks within the proposed multitask model architecture. Experiments on publicly available datasets across several state-of-the-art MIS models reveal that CEASE surpasses well-known defences, reducing the attack success rate to as low as 0% even at the maximum average distortion, while also providing a modest performance improvement.
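The abstract does not give CEASE's actual loss function, but the contrastive idea it describes (pulling together the features of clean, adversarial, and augmented views of the same image) can be sketched with a standard NT-Xent-style contrastive loss. The sketch below is a hypothetical illustration in NumPy, not the authors' method: the function names, the temperature value, and the use of cosine similarity are all assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two embedding matrices.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def contrastive_alignment_loss(clean, adv, aug, temperature=0.1):
    """Hypothetical NT-Xent-style sketch: for each image, its clean,
    adversarial, and augmented embeddings are treated as positives,
    and embeddings of all other images in the batch as negatives."""
    n = clean.shape[0]
    z = np.concatenate([clean, adv, aug], axis=0)   # (3N, D): all views stacked
    sim = cosine_sim(z, z) / temperature            # (3N, 3N) similarity logits
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    labels = np.tile(np.arange(n), 3)               # same index = same image
    losses = []
    for i in range(3 * n):
        pos = labels == labels[i]
        pos[i] = False                              # the other two views are positives
        log_prob = sim[i] - np.log(np.exp(sim[i]).sum())
        losses.append(-log_prob[pos].mean())        # maximise positive likelihood
    return float(np.mean(losses))
```

When the three views of each image map to identical embeddings, this loss is small; when the views are unrelated, it is large, so minimising it during training encourages the feature alignment the abstract describes.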
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.