PoisonHD: Poison Attack on Brain-Inspired Hyperdimensional Computing

Ruixuan Wang, Xun Jiao
{"title":"PoisonHD: Poison Attack on Brain-Inspired Hyperdimensional Computing","authors":"Ruixuan Wang, Xun Jiao","doi":"10.23919/DATE54114.2022.9774641","DOIUrl":null,"url":null,"abstract":"While machine learning (ML) methods especially deep neural networks (DNNs) promise enormous societal and economic benefits, their deployments present daunting challenges due to intensive computational demands and high storage requirements. Brain-inspired hyperdimensional computing (HDC) has recently been introduced as an alternative computational model that mimics the “human brain” at the functionality level. HDC has already demonstrated promising accuracy and efficiency in multiple application domains including healthcare and robotics. However, the robustness and security aspects of HDC has not been systematically investigated and sufficiently examined. Poison attack is a commonly-seen attack on various ML models including DNNs. It injects noises to labels of training data to introduce classification error of ML models. This paper presents PoisonHD, an HDC-specific poison attack framework that maximizes its effectiveness in degrading the classification accuracy by leveraging the internal structural information of HDC models. By applying PoisonHD on three datasets, we show that PoisonHD can cause significantly greater accuracy drop on HDC model than a random label-flipping approach. We further develop a defense mechanism by designing an HDC-based data sanitization that can significantly recover the accuracy loss caused by poison attack. To the best of our knowledge, this is the first paper that studies the poison attack on HDC models.","PeriodicalId":232583,"journal":{"name":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/DATE54114.2022.9774641","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

While machine learning (ML) methods, especially deep neural networks (DNNs), promise enormous societal and economic benefits, their deployment presents daunting challenges due to intensive computational demands and high storage requirements. Brain-inspired hyperdimensional computing (HDC) has recently been introduced as an alternative computational model that mimics the "human brain" at the functional level. HDC has already demonstrated promising accuracy and efficiency in multiple application domains, including healthcare and robotics. However, the robustness and security of HDC have not been systematically investigated or sufficiently examined. The poison attack, commonly mounted against various ML models including DNNs, injects noise into the labels of training data to introduce classification errors. This paper presents PoisonHD, an HDC-specific poison attack framework that maximizes its effectiveness in degrading classification accuracy by leveraging the internal structural information of HDC models. By applying PoisonHD to three datasets, we show that it causes a significantly greater accuracy drop on HDC models than a random label-flipping approach. We further develop a defense mechanism, an HDC-based data sanitization scheme that can significantly recover the accuracy lost to the poison attack. To the best of our knowledge, this is the first paper to study poison attacks on HDC models.
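The abstract names three ingredients: an HDC classifier built from class hypervectors, a label-flipping poison attack guided by the model's internal structure, and a similarity-based data sanitization defense. The sketch below illustrates these ideas in Python. It is a minimal sketch under stated assumptions, not the paper's implementation: the random-projection encoder, the "flip to the least-similar class" heuristic, the sanitization threshold, and all function names are illustrative stand-ins.

```python
# Minimal sketch (assumptions flagged inline): an HDC classifier with
# bundled class hypervectors, a structure-aware label-flipping attack,
# and a similarity-based data-sanitization defense.
import numpy as np

D = 10_000  # hypervector dimensionality

def encode(X, proj):
    # Hypothetical encoder: random projection followed by sign()
    # to obtain bipolar {-1, +1} hypervectors.
    return np.sign(X @ proj)

def train(H, y, n_classes):
    # Bundle (element-wise sum) each class's hypervectors into a prototype.
    return np.stack([H[y == c].sum(axis=0) for c in range(n_classes)])

def classify(H, prototypes):
    # Predict the class whose prototype is most cosine-similar.
    sims = H @ prototypes.T
    sims = sims / np.linalg.norm(H, axis=1, keepdims=True)
    sims = sims / np.linalg.norm(prototypes, axis=1)
    return sims.argmax(axis=1)

def poison_labels(H, y, prototypes, budget, rng):
    # Assumed heuristic: relabel `budget` random samples to the class whose
    # prototype is LEAST similar to the sample's encoding, so each flip
    # injects maximally contradictory evidence into that prototype.
    # (A stand-in for PoisonHD's use of internal structural information.)
    sims = H @ prototypes.T
    victims = rng.choice(len(y), size=budget, replace=False)
    s = sims[victims].copy()
    s[np.arange(budget), y[victims]] = np.inf  # never "flip" to the true label
    y_poisoned = y.copy()
    y_poisoned[victims] = s.argmin(axis=1)
    return y_poisoned

def sanitize(H, y, prototypes, threshold=0.0):
    # Sanitization sketch: drop samples whose cosine similarity to their
    # *labeled* class prototype falls below `threshold`, on the assumption
    # that mislabeled (poisoned) samples look dissimilar to the class
    # their label claims.
    dots = (H * prototypes[y]).sum(axis=1)
    sims = dots / (np.linalg.norm(H, axis=1) * np.linalg.norm(prototypes[y], axis=1))
    keep = sims >= threshold
    return H[keep], y[keep]

# Toy end-to-end run on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))
y = (X[:, 0] > 0).astype(int)          # two linearly separable classes
proj = rng.normal(size=(32, D))
H = encode(X, proj)

clean = train(H, y, n_classes=2)
y_bad = poison_labels(H, y, clean, budget=200, rng=rng)
poisoned = train(H, y_bad, n_classes=2)

H_s, y_s = sanitize(H, y_bad, poisoned)  # filter suspect samples, retrain
recovered = train(H_s, y_s, n_classes=2)
for name, proto in [("clean", clean), ("poisoned", poisoned), ("sanitized", recovered)]:
    acc = (classify(H, proto) == y).mean()
    print(f"{name:9s} accuracy: {acc:.3f}")
```

In this toy setup, retraining on the sanitized set filters out suspect samples before rebuilding the class prototypes; the `threshold` parameter controls how aggressively suspected mislabeled points are discarded, trading recovered accuracy against loss of clean data.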