Analysis of the Effect of Black Box Adversarial Attacks on Medical Image Classification Models

P. S, A. V, Sreeratcha B, Preeti Krishnaveni Ra, Snofy D. Dunston, M. Rajam V.
{"title":"Analysis of the Effect of Black Box Adversarial Attacks on Medical Image Classification Models","authors":"P. S, A. V, Sreeratcha B, Preeti Krishnaveni Ra, Snofy D. Dunston, M. Rajam V.","doi":"10.1109/ICICICT54557.2022.9917603","DOIUrl":null,"url":null,"abstract":"In the field of medical science, the reliability of the results produced by deep learning classifiers on disease diagnosis plays a crucial role. The reliability of the classifier substantially reduces by the presence of adversarial examples. The adversarial examples mislead the classifiers to give wrong prediction with equal or more confidence than the actual prediction. The adversarial attacks in the black box type is done by creating a pseudo model that resembles the target model. From the pseudo model, the attack is created and is transferred to the target model. In this work, the Fast Gradient Sign Method and its variants Momentum Iterative Fast Gradient Sign Method, Projected Gradient Descent and Basic Iterative Method are used to create adversarial examples on a target VGG-16 model. The datasets used are Diabetic Retinopathy 2015 Data Colored Resized and SARS-CoV-2 CT Scan Dataset. The experimentation revealed that the transferability of attack is true for the above described attack methods on a VGG-16 model. Also, the Projected Gradient Descent attack provides a higher success in attack in comparison with the other methods experimented in this work.","PeriodicalId":246214,"journal":{"name":"2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICICT54557.2022.9917603","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In medical science, the reliability of the diagnoses produced by deep learning classifiers is crucial. This reliability is substantially reduced by the presence of adversarial examples, which mislead a classifier into making a wrong prediction with confidence equal to or greater than that of the correct prediction. In the black-box setting, the attack is carried out by building a pseudo (surrogate) model that resembles the target model; adversarial examples are crafted on the pseudo model and then transferred to the target model. In this work, the Fast Gradient Sign Method (FGSM) and its variants, the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), Projected Gradient Descent (PGD), and the Basic Iterative Method (BIM), are used to create adversarial examples against a target VGG-16 model. The datasets used are the Diabetic Retinopathy 2015 Data Colored Resized dataset and the SARS-CoV-2 CT Scan dataset. The experiments show that the attacks transfer to the VGG-16 target model for all of the methods described above, and that the PGD attack achieves a higher attack success rate than the other methods evaluated in this work.
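
The transfer-attack pipeline summarized above can be sketched as follows. This is a minimal illustration, assuming PyTorch and torchvision (the paper does not name its framework); the surrogate and target models, epsilon, step size, and the random input are placeholders rather than the authors' configuration. FGSM takes a single step along the sign of the surrogate's loss gradient, PGD iterates such steps and projects back into an epsilon-ball around the clean image, and transferability is tested by feeding examples crafted on the surrogate to the separately trained target model.

    # Minimal sketch of a black-box transfer attack (assumed PyTorch/torchvision setup).
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    def fgsm(surrogate, x, y, epsilon=0.03):
        """Single-step FGSM: perturb along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(surrogate(x), y).backward()
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    def pgd(surrogate, x, y, epsilon=0.03, alpha=0.007, steps=10):
        """PGD: iterated FGSM steps, projected back into the epsilon-ball of x."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(surrogate(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project onto the L-infinity ball of radius epsilon and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0.0, 1.0)
        return x_adv

    # Placeholder surrogate and target models (binary classification, as in the
    # COVID CT dataset); examples are crafted on the surrogate and evaluated on
    # the target. A changed target prediction indicates a successful transfer.
    surrogate, target = vgg16(num_classes=2).eval(), vgg16(num_classes=2).eval()
    x = torch.rand(1, 3, 224, 224)   # placeholder image batch in [0, 1]
    y = torch.tensor([1])            # placeholder ground-truth label

    x_adv = pgd(surrogate, x, y)
    print("clean prediction:      ", target(x).argmax(dim=1).item())
    print("adversarial prediction:", target(x_adv).argmax(dim=1).item())

In this setting only the surrogate's gradients are used; the target model is queried purely as a black box, which is what makes the measured attack success a test of transferability rather than of white-box strength.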