Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier

Jonathan Lam, Pengrui Quan, Jiaming Xu, J. Jeyakumar, M. Srivastava
{"title":"Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier","authors":"Jonathan Lam, Pengrui Quan, Jiaming Xu, J. Jeyakumar, M. Srivastava","doi":"10.1145/3417312.3431827","DOIUrl":null,"url":null,"abstract":"Through aiding the process of diagnosing cardiovascular diseases (CVD) such as arrhythmia, electrocardiograms (ECGs) have progressively improved prospects for an automated diagnosis system in modern healthcare. Recent years have seen the promising applications of deep neural networks (DNNs) in analyzing ECG data, even outperforming cardiovascular experts in identifying certain rhythm irregularities. However, DNNs have shown to be susceptible to adversarial attacks, which intentionally compromise the models by adding perturbations to the inputs. This concept is also applicable to DNN-based ECG classifiers and the prior works generate these adversarial attacks in a white-box setting where the model details are exposed to the attackers. However, the black-box condition, where the classification model's architecture and parameters are unknown to the attackers, remains mostly unexplored. Thus, we aim to fool ECG classifiers in the black-box and hard-label setting where given an input, only the final predicted category is visible to the attacker. Our attack on the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 [12] database produced ECG data sets mostly indistinguishable from the white-box version of an adversarial attack on this same database. Our results demonstrate that we can effectively generate the adversarial ECG inputs in this black-box setting, which raises significant concerns regarding the potential applications of DNN-based ECG classifiers in security-critical systems.","PeriodicalId":361484,"journal":{"name":"Proceedings of the 1st ACM International Workshop on Security and Safety for Intelligent Cyber-Physical Systems","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st ACM International Workshop on Security and Safety for Intelligent Cyber-Physical Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3417312.3431827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

By aiding the diagnosis of cardiovascular diseases (CVD) such as arrhythmia, electrocardiograms (ECGs) have steadily improved the prospects for automated diagnosis systems in modern healthcare. Recent years have seen promising applications of deep neural networks (DNNs) to ECG analysis, even outperforming cardiovascular experts at identifying certain rhythm irregularities. However, DNNs have been shown to be susceptible to adversarial attacks, which intentionally compromise models by adding perturbations to their inputs. This vulnerability also applies to DNN-based ECG classifiers, but prior work generates such adversarial attacks in a white-box setting, where the model's details are exposed to the attacker. The black-box condition, where the classification model's architecture and parameters are unknown to the attacker, remains mostly unexplored. We therefore aim to fool ECG classifiers in the black-box, hard-label setting, where, given an input, only the final predicted category is visible to the attacker. Our attack on the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 [12] database produced ECG examples mostly indistinguishable from those of a white-box adversarial attack on the same database. Our results demonstrate that adversarial ECG inputs can be generated effectively in this black-box setting, which raises significant concerns about deploying DNN-based ECG classifiers in security-critical systems.
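To make the hard-label setting concrete, below is a minimal sketch of a decision-based attack loop on a 1-D signal. It is not the authors' algorithm; it follows the general boundary-attack-style recipe (start from a misclassified point, then random-walk toward the original input while staying on the adversarial side of the decision boundary). The `predict_label` oracle is a hypothetical stand-in for the victim DNN, which in this setting answers queries with only a class label.

```python
# Hedged sketch of a hard-label (decision-based) black-box attack on a
# 1-D ECG-like signal. `predict_label` and `hard_label_attack` are
# illustrative names, not the paper's API.
import numpy as np

rng = np.random.default_rng(0)

def predict_label(x):
    # Stand-in oracle: a toy "classifier" that thresholds mean amplitude.
    # In a real attack this would be one query to the deployed DNN,
    # which returns only the predicted rhythm class (the hard label).
    return int(x.mean() > 0.0)

def hard_label_attack(x_orig, y_orig, n_queries=2000, step=0.05):
    """Boundary-style random walk: stay adversarial, move toward x_orig."""
    # 1. Find any starting point the model already misclassifies.
    x_adv = x_orig + rng.normal(scale=1.0, size=x_orig.shape)
    while predict_label(x_adv) == y_orig:
        x_adv = x_orig + rng.normal(scale=1.0, size=x_orig.shape)

    for _ in range(n_queries):
        # 2. Propose a small random step, biased toward the original input.
        candidate = x_adv + step * rng.normal(size=x_orig.shape)
        candidate = candidate + step * (x_orig - candidate)
        # 3. Accept only if it stays adversarial AND reduces the distortion.
        if (predict_label(candidate) != y_orig and
                np.linalg.norm(candidate - x_orig)
                < np.linalg.norm(x_adv - x_orig)):
            x_adv = candidate
    return x_adv

# Example: a synthetic one-second "ECG" segment at 300 Hz
# (the PhysioNet 2017 challenge sampling rate).
x = np.sin(2 * np.pi * 1.2 * np.arange(300) / 300.0)
y = predict_label(x)
x_adv = hard_label_attack(x, y)
print("label flipped:", predict_label(x_adv) != y,
      "| L2 distortion:", np.linalg.norm(x_adv - x))
```

Each accepted step costs one oracle query, so the quality of the final perturbation is governed by the query budget `n_queries`; practical decision-based attacks add step-size adaptation and gradient estimation to reduce that budget.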