Sheikh Burhan Ul Haque, Aasim Zafar, Sheikh Moeen ul haque, Sheikh Riyaz ul Haq, Mohassin Ahmad
{"title":"保护医疗保健中的人工智能:三层防御以减轻放射成像中的对抗性噪声影响","authors":"Sheikh Burhan Ul Haque , Aasim Zafar , Sheikh Moeen ul haque , Sheikh Riyaz ul Haq , Mohassin Ahmad","doi":"10.1016/j.bspc.2025.107969","DOIUrl":null,"url":null,"abstract":"<div><div>Early detection of lung nodules through CT imaging is crucial for timely treatment and improved patient outcomes. Artificial intelligence (AI), particularly deep learning (DL), has shown exceptional promise, often surpassing human expertise in diagnosing lung cancer. However, the vulnerability of DL models to adversarial noise—imperceptible perturbations designed to mislead models—remains underexplored in medical imaging. To the best of our knowledge, this is the first study to comprehensively analyze the effects of targeted and untargeted adversarial noise on DL-based medical diagnosis models. Additionally, we propose a novel three-tier defense strategy to mitigate these adversarial impacts on radiology images. The proposed approach combines modified adversarial training (MAT) during the training phase with Total Variation Minimization (TVM) flowed by bit-plane slicing (BPS) at the testing phase, ensuring robust performance against adversarial attacks in all the phases. MAT strengthens model resilience by exposing it to adversarial examples with varying epsilon values, improving its ability to counter diverse perturbations. At inference, TVM reduces high-frequency adversarial noise while preserving essential image structures, and BPS further enhances robustness by extracting critical features and discarding less significant details prone to adversarial manipulation. A lung nodule classification model was developed using transfer learning with DenseNet-121, trained on the publicly available LIDC-IDRI dataset. The model achieved 95.71% training accuracy and 93.17% testing accuracy on clean images. However, when exposed to adversarial attacks such as Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), accuracy dropped significantly to 13.74% under FGSM and 1.32% under PGD. The proposed defense strategy successfully restored performance, achieving an average accuracy of approximately 93% against both FGSM and PGD attacks. These results demonstrate that the defense approach effectively mitigates adversarial noise across both training and testing phases, improving the reliability of DL models in medical image analysis. By enhancing robustness in lung cancer detection, this study contributes to the advancement of AI-driven healthcare, ensuring safer and more trustworthy diagnostic systems.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"109 ","pages":"Article 107969"},"PeriodicalIF":4.9000,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Securing AI in Healthcare: A Three-Layer Defense to Mitigate Adversarial Noise Impact in Radiology Imaging\",\"authors\":\"Sheikh Burhan Ul Haque , Aasim Zafar , Sheikh Moeen ul haque , Sheikh Riyaz ul Haq , Mohassin Ahmad\",\"doi\":\"10.1016/j.bspc.2025.107969\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Early detection of lung nodules through CT imaging is crucial for timely treatment and improved patient outcomes. Artificial intelligence (AI), particularly deep learning (DL), has shown exceptional promise, often surpassing human expertise in diagnosing lung cancer. 
However, the vulnerability of DL models to adversarial noise—imperceptible perturbations designed to mislead models—remains underexplored in medical imaging. To the best of our knowledge, this is the first study to comprehensively analyze the effects of targeted and untargeted adversarial noise on DL-based medical diagnosis models. Additionally, we propose a novel three-tier defense strategy to mitigate these adversarial impacts on radiology images. The proposed approach combines modified adversarial training (MAT) during the training phase with Total Variation Minimization (TVM) flowed by bit-plane slicing (BPS) at the testing phase, ensuring robust performance against adversarial attacks in all the phases. MAT strengthens model resilience by exposing it to adversarial examples with varying epsilon values, improving its ability to counter diverse perturbations. At inference, TVM reduces high-frequency adversarial noise while preserving essential image structures, and BPS further enhances robustness by extracting critical features and discarding less significant details prone to adversarial manipulation. A lung nodule classification model was developed using transfer learning with DenseNet-121, trained on the publicly available LIDC-IDRI dataset. The model achieved 95.71% training accuracy and 93.17% testing accuracy on clean images. However, when exposed to adversarial attacks such as Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), accuracy dropped significantly to 13.74% under FGSM and 1.32% under PGD. The proposed defense strategy successfully restored performance, achieving an average accuracy of approximately 93% against both FGSM and PGD attacks. These results demonstrate that the defense approach effectively mitigates adversarial noise across both training and testing phases, improving the reliability of DL models in medical image analysis. By enhancing robustness in lung cancer detection, this study contributes to the advancement of AI-driven healthcare, ensuring safer and more trustworthy diagnostic systems.</div></div>\",\"PeriodicalId\":55362,\"journal\":{\"name\":\"Biomedical Signal Processing and Control\",\"volume\":\"109 \",\"pages\":\"Article 107969\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2025-05-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Signal Processing and Control\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S174680942500480X\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S174680942500480X","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Securing AI in Healthcare: A Three-Layer Defense to Mitigate Adversarial Noise Impact in Radiology Imaging
Early detection of lung nodules through CT imaging is crucial for timely treatment and improved patient outcomes. Artificial intelligence (AI), particularly deep learning (DL), has shown exceptional promise, often surpassing human expertise in diagnosing lung cancer. However, the vulnerability of DL models to adversarial noise (imperceptible perturbations designed to mislead models) remains underexplored in medical imaging. To the best of our knowledge, this is the first study to comprehensively analyze the effects of targeted and untargeted adversarial noise on DL-based medical diagnosis models. Additionally, we propose a novel three-tier defense strategy to mitigate these adversarial impacts on radiology images. The proposed approach combines modified adversarial training (MAT) during the training phase with Total Variation Minimization (TVM) followed by bit-plane slicing (BPS) at the testing phase, ensuring robust performance against adversarial attacks in both phases. MAT strengthens model resilience by exposing it to adversarial examples with varying epsilon values, improving its ability to counter diverse perturbations. At inference, TVM reduces high-frequency adversarial noise while preserving essential image structures, and BPS further enhances robustness by extracting critical features and discarding less significant details prone to adversarial manipulation. A lung nodule classification model was developed using transfer learning with DenseNet-121, trained on the publicly available LIDC-IDRI dataset. The model achieved 95.71% training accuracy and 93.17% testing accuracy on clean images. Under adversarial attack, however, accuracy dropped sharply: to 13.74% under the Fast Gradient Sign Method (FGSM) and to 1.32% under Projected Gradient Descent (PGD). The proposed defense strategy successfully restored performance, achieving an average accuracy of approximately 93% against both FGSM and PGD attacks. These results demonstrate that the defense approach effectively mitigates adversarial noise across both training and testing phases, improving the reliability of DL models in medical image analysis. By enhancing robustness in lung cancer detection, this study contributes to the advancement of AI-driven healthcare, ensuring safer and more trustworthy diagnostic systems.
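For concreteness, a minimal sketch in PyTorch of the kind of DenseNet-121 transfer-learning setup the abstract describes is given below. The pretrained-weight choice and the number of output classes are assumptions; the abstract does not specify either.

import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet-121 backbone (assumed starting point).
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
# Replace the classifier head for lung nodule classification; two classes
# (e.g., benign vs. malignant) is an assumption, not stated in the abstract.
model.classifier = nn.Linear(model.classifier.in_features, 2)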
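The FGSM and PGD attacks used in the evaluation can be sketched in their standard textbook form; this is not the authors' exact implementation, and the PGD step size and iteration count are illustrative.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Single-step FGSM: move each pixel by eps in the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha=0.01, steps=10):
    # Iterative PGD: repeated gradient-sign steps, projected back into the
    # L-infinity eps-ball around the clean image after every step.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection step
            x_adv = x_adv.clamp(0, 1).detach()        # keep valid pixel range
    return x_adv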
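One plausible reading of the modified adversarial training step (exposing the model to adversarial examples at varying epsilon values) is the training loop below, which reuses the fgsm_attack sketch above. The epsilon set and the equal weighting of clean and adversarial losses are assumptions, not details from the paper.

import random

def mat_epoch(model, loader, optimizer, eps_values=(2/255, 4/255, 8/255)):
    # One epoch of modified adversarial training: each batch is paired with
    # adversarial counterparts crafted at a randomly drawn epsilon budget.
    model.train()
    for x, y in loader:
        eps = random.choice(eps_values)        # vary the perturbation strength
        x_adv = fgsm_attack(model, x, y, eps)  # from the sketch above
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()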
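The two test-time layers can likewise be sketched with standard tools: scikit-image's Chambolle TV denoiser for TVM, then NumPy bitwise masking for BPS. The TV weight and the number of retained bit planes are illustrative parameters, not values from the paper.

import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tvm_bps_defense(image_u8, tv_weight=0.1, keep_planes=4):
    # Layer 1 (TVM): smooth high-frequency adversarial noise while
    # preserving large-scale image structure.
    denoised = denoise_tv_chambolle(image_u8.astype(np.float64) / 255.0,
                                    weight=tv_weight)
    # Layer 2 (BPS): re-quantize to 8 bits and zero the least significant
    # bit planes, which are the easiest for an attacker to manipulate.
    pixels = np.clip(denoised * 255.0, 0, 255).astype(np.uint8)
    mask = np.uint8((0xFF << (8 - keep_planes)) & 0xFF)
    return pixels & mask

A defended prediction would then be made on tvm_bps_defense(ct_slice) rather than on the raw CT slice.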
Journal introduction:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.