Adar: Adversarial Activity Recognition in Wearables

Ramesh Kumar Sah, Hassan Ghasemzadeh
{"title":"Adar: Adversarial Activity Recognition in Wearables","authors":"Ramesh Kumar Sah, Hassan Ghasemzadeh","doi":"10.1109/iccad45719.2019.8942124","DOIUrl":null,"url":null,"abstract":"Recent advances in machine learning and deep neural networks have led to the realization of many important applications in the area of personalized medicine. Whether it is detecting activities of daily living or analyzing images for cancerous cells, machine learning algorithms have become the dominant choice for such emerging applications. In particular, the state-of-the-art algorithms used for human activity recognition (HAR) using wearable inertial sensors utilize machine learning algorithms to detect health events and to make predictions from sensor data. Currently, however, there remains a gap in research on whether or not and how activity recognition algorithms may become the subject of adversarial attacks. In this paper, we take the first strides on (1) investigating methods of generating adversarial example in the context of HAR systems; (2) studying the vulnerability of activity recognition models to adversarial examples in feature and signal domain; and (3) investigating the effects of adversarial training on HAR systems. We introduce Adar11Software code and experimental data for Adar are available online at https://github.com/rameshKrSah/Adar., a novel computational framework for optimization-driven creation of adversarial examples in sensor-based activity recognition systems. Through extensive analysis based on real sensor data collected with human subjects, we found that simple evasion attacks are able to decrease the accuracy of a deep neural network from 95.1% to 3.4% and from 93.1% to 16.8% in the case of a convolutional neural network. With adversarial training, the robustness of the deep neural network increased on the adversarial examples by 49.1% in the worst case while the accuracy on clean samples decreased by 13.2%.","PeriodicalId":363364,"journal":{"name":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccad45719.2019.8942124","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Recent advances in machine learning and deep neural networks have led to the realization of many important applications in the area of personalized medicine. Whether it is detecting activities of daily living or analyzing images for cancerous cells, machine learning algorithms have become the dominant choice for such emerging applications. In particular, the state-of-the-art algorithms used for human activity recognition (HAR) with wearable inertial sensors utilize machine learning to detect health events and to make predictions from sensor data. Currently, however, there remains a gap in research on whether, and how, activity recognition algorithms may become the subject of adversarial attacks. In this paper, we take the first strides toward (1) investigating methods of generating adversarial examples in the context of HAR systems; (2) studying the vulnerability of activity recognition models to adversarial examples in the feature and signal domains; and (3) investigating the effects of adversarial training on HAR systems. We introduce Adar, a novel computational framework for optimization-driven creation of adversarial examples in sensor-based activity recognition systems. (Software code and experimental data for Adar are available online at https://github.com/rameshKrSah/Adar.) Through extensive analysis based on real sensor data collected with human subjects, we found that simple evasion attacks can decrease the accuracy of a deep neural network from 95.1% to 3.4%, and that of a convolutional neural network from 93.1% to 16.8%. With adversarial training, the robustness of the deep neural network on adversarial examples increased by 49.1% in the worst case, while its accuracy on clean samples decreased by 13.2%.
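The "simple evasion attacks" mentioned in the abstract are gradient-based perturbation attacks on the classifier's input. Below is a minimal sketch of one representative of that family, the fast gradient sign method (FGSM), applied in the signal domain to a toy HAR classifier over raw inertial sensor windows. This is an illustrative assumption about the attack family, not the paper's exact Adar procedure; the model `HarNet`, the window shape, and the perturbation budget `epsilon` are all hypothetical.

```python
# Minimal sketch: FGSM evasion attack on a toy HAR model (assumptions,
# not the paper's Adar implementation).
import torch
import torch.nn as nn

class HarNet(nn.Module):
    """Toy 1-D CNN over windows of shape (3 accelerometer axes, 128 samples)."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

def fgsm_attack(model, x, y, epsilon: float = 0.1):
    """Return adversarially perturbed copies of sensor windows x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step of size epsilon, then detach the result.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = HarNet()
x = torch.randn(8, 3, 128)      # batch of 8 accelerometer windows (synthetic)
y = torch.randint(0, 6, (8,))   # ground-truth activity labels (synthetic)
x_adv = fgsm_attack(model, x, y)
print((model(x_adv).argmax(1) != y).float().mean())  # fraction misclassified
```

On a trained model, measuring accuracy on `x_adv` versus `x` is exactly the kind of comparison behind the abstract's reported accuracy drops (e.g., 95.1% to 3.4%).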
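Adversarial training, the defense the paper evaluates, augments each training batch with adversarial versions of its own samples. The sketch below reuses `HarNet` and `fgsm_attack` from the previous example and shows one common formulation, an equal-weight mix of clean and adversarial loss; the mixing ratio, optimizer, and epsilon are assumptions, not details from the paper.

```python
# Minimal sketch: one adversarial-training step (assumed 50/50 clean/adversarial
# loss mix; reuses HarNet, fgsm_attack, model, x, y from the previous sketch).
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, epsilon: float = 0.1):
    model.train()
    # Generate adversarial counterparts of this batch on the fly.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from the attack
    loss_clean = nn.functional.cross_entropy(model(x), y)
    loss_adv = nn.functional.cross_entropy(model(x_adv), y)
    loss = 0.5 * (loss_clean + loss_adv)
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):  # a few illustrative steps
    adversarial_training_step(model, optimizer, x, y)
```

Training on this mixed objective is what produces the trade-off the abstract reports: robustness on adversarial examples rises, while accuracy on clean samples drops.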