Evaluating the Adversarial Robustness of Text Classifiers in Hyperdimensional Computing

Harsha Moraliyage, Sachin Kahawala, Daswin De Silva, D. Alahakoon
{"title":"Evaluating the Adversarial Robustness of Text Classifiers in Hyperdimensional Computing","authors":"Harsha Moraliyage, Sachin Kahawala, Daswin De Silva, D. Alahakoon","doi":"10.1109/HSI55341.2022.9869459","DOIUrl":null,"url":null,"abstract":"Hyperdimensional (HD) Computing leverages random high dimensional vectors (>10000 dimensions) known as hypervectors for data representation. This high dimensional feature representation is inherently redundant which results in increased robustness against noise and it also enables the use of a computationally simple operations for all vector functions. These two properties of hypervectors have led to energy efficient and fast learning capabilities in numerous Artificial Intelligence (AI) applications. Despite the increasing number of such AI HD applications, their susceptibility to adversarial attacks has not been explored, specifically in the text domain. To the best of our knowledge, this is the first research endeavour to evaluate the adversarial robustness of HD text classifiers and report on their vulnerability to such attacks. In this paper, we designed and developed n-grams based HD computing text classifiers for two primary applications of HD computing; language recognition and text classification, and then performed a set of character level and word level grey-box adversarial attacks, where an attacker’s goal is to mislead the target HD computing classifier to produce false prediction labels while keeping added perturbation noise as low as possible. Our results show that adversarial examples generated by the attacks can mislead the HD computing classifiers to produce incorrect prediction labels. However, HD computing classifiers show a higher degree of adversarial robustness in language recognition compared to text classification tasks. 
The robustness of HD computing classifiers against character-level attacks is significantly higher compared to word-level attacks and has the highest accuracy compared to deep learning-based classifiers. Finally, we evaluate the effectiveness of adversarial training as a possible defense strategy against adversarial attacks in HD computing text classifiers.","PeriodicalId":282607,"journal":{"name":"2022 15th International Conference on Human System Interaction (HSI)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 15th International Conference on Human System Interaction (HSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HSI55341.2022.9869459","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Hyperdimensional (HD) Computing leverages random high-dimensional vectors (>10,000 dimensions), known as hypervectors, for data representation. This high-dimensional feature representation is inherently redundant, which increases robustness against noise, and it enables the use of computationally simple operations for all vector functions. These two properties of hypervectors have led to energy-efficient and fast learning capabilities in numerous Artificial Intelligence (AI) applications. Despite the increasing number of such HD AI applications, their susceptibility to adversarial attacks has not been explored, specifically in the text domain. To the best of our knowledge, this is the first research endeavour to evaluate the adversarial robustness of HD text classifiers and report on their vulnerability to such attacks. In this paper, we designed and developed n-gram-based HD computing text classifiers for two primary applications of HD computing: language recognition and text classification. We then performed a set of character-level and word-level grey-box adversarial attacks, where the attacker's goal is to mislead the target HD computing classifier into producing false prediction labels while keeping the added perturbation noise as low as possible. Our results show that adversarial examples generated by the attacks can mislead the HD computing classifiers into producing incorrect prediction labels. However, HD computing classifiers show a higher degree of adversarial robustness in language recognition than in text classification tasks. The robustness of HD computing classifiers against character-level attacks is significantly higher than against word-level attacks, and they achieve the highest accuracy compared to deep learning-based classifiers. Finally, we evaluate the effectiveness of adversarial training as a possible defence strategy against adversarial attacks on HD computing text classifiers.
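The abstract describes n-gram-based HD text classifiers and character-level grey-box attacks against them. The following is a minimal sketch of how such a pipeline is commonly built, not the paper's actual implementation: it assumes bipolar hypervectors, rotation-based positional permutation, elementwise-product binding of n-grams, bundling by summation, and cosine similarity to class prototypes. The n-gram size, alphabet, training texts, and the greedy substitution attack are all illustrative choices, not taken from the paper.

```python
import numpy as np

D = 10000  # hypervector dimensionality (>10,000 per the abstract)
N = 3      # n-gram size (illustrative; the paper's exact n is not stated here)
rng = np.random.default_rng(0)

# Item memory: one random bipolar hypervector per character (assumed alphabet)
alphabet = "abcdefghijklmnopqrstuvwxyz "
item_memory = {c: rng.choice([-1, 1], size=D) for c in alphabet}

def encode(text):
    """Encode text as the bundled (summed) set of its n-gram hypervectors.
    Each n-gram binds position-permuted character hypervectors by
    elementwise multiplication; np.roll serves as the permutation."""
    acc = np.zeros(D)
    for i in range(len(text) - N + 1):
        gram = np.ones(D, dtype=int)
        for pos, ch in enumerate(text[i:i + N]):
            gram *= np.roll(item_memory[ch], pos)
        acc += gram
    return acc

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Class prototypes: bundle the encodings of each class's training texts
# (toy language-recognition data, purely illustrative)
train = {"en": ["the cat sat on the mat"], "de": ["der hund lief im park"]}
prototypes = {lbl: sum(encode(t) for t in texts) for lbl, texts in train.items()}

def classify(text):
    return max(prototypes, key=lambda lbl: cosine(encode(text), prototypes[lbl]))

def char_attack(text, label, budget=2):
    """Illustrative greedy character-level grey-box attack: at each step,
    keep the single-character substitution that most reduces similarity to
    the true class prototype, stopping when the label flips or the edit
    budget is spent. This is a generic scheme, not the paper's attack."""
    for _ in range(budget):
        best, best_sim = text, cosine(encode(text), prototypes[label])
        for i in range(len(text)):
            for c in alphabet:
                if c == text[i]:
                    continue
                cand = text[:i] + c + text[i + 1:]
                sim = cosine(encode(cand), prototypes[label])
                if sim < best_sim:
                    best, best_sim = cand, sim
        text = best
        if classify(text) != label:
            break
    return text
```

Because matching n-grams contribute a dot product of D while unrelated hypervectors are quasi-orthogonal (noise on the order of sqrt(D)), even a handful of shared n-grams dominates the similarity score; the attack exploits this by corrupting exactly those shared n-grams.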