Adversarial artifact detection in EEG-based brain-computer interfaces

Xiaoqing Chen, Lubin Meng, Yifan Xu, Dongrui Wu
Journal of Neural Engineering, published 2024-10-30. DOI: 10.1088/1741-2552/ad8964

Abstract


Objective. Machine learning has achieved significant success in electroencephalogram (EEG) based brain-computer interfaces (BCIs), with most existing research focusing on improving decoding accuracy. However, recent studies have shown that EEG-based BCIs are vulnerable to adversarial attacks, where small perturbations added to the input can cause misclassification. Detecting adversarial examples is crucial both for understanding this phenomenon and for developing effective defense strategies.

Approach. This paper explores, for the first time, adversarial detection in EEG-based BCIs. We extend several popular adversarial detection approaches from computer vision to BCIs. We also propose two new Mahalanobis distance-based and three cosine distance-based adversarial detection approaches, which showed promising performance in detecting three kinds of white-box attacks.

Main results. We evaluated the performance of eight adversarial detection approaches on three EEG datasets, three neural networks, and four types of adversarial attacks. Our approach achieved an area under the curve (AUC) score of up to 99.99% in detecting white-box attacks. Additionally, we assessed the transferability of different adversarial detectors to unknown attacks.

Significance. Through extensive experiments, we found that white-box attacks may be easily detected, and that the distributions of different types of adversarial examples differ. Our work should facilitate understanding the vulnerability of existing BCI models and developing more secure BCIs in the future.
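The detectors themselves are not specified in the abstract. As a rough illustration of the general idea behind Mahalanobis distance-based detection (fit per-class Gaussians to a network's penultimate-layer features and flag inputs that lie far from every class mean), here is a minimal NumPy sketch. All function names, the tied-covariance estimate, and the thresholding scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Mahalanobis distance-based adversarial detection.
# Illustrative only; the paper's exact formulation may differ.
import numpy as np

def fit_class_statistics(features, labels):
    """Estimate per-class means and a shared (tied) covariance from
    penultimate-layer features of clean training EEG trials."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return means, precision

def mahalanobis_score(x_feat, means, precision):
    """Score = negative Mahalanobis distance to the closest class mean.
    Adversarial trials tend to lie far from all class-conditional
    Gaussians, so their scores are low."""
    dists = [(x_feat - mu) @ precision @ (x_feat - mu) for mu in means.values()]
    return -min(dists)

def is_adversarial(x_feat, means, precision, threshold):
    """Flag a trial when its score falls below a threshold chosen on
    validation data (e.g., fixing a true-positive rate on clean trials)."""
    return mahalanobis_score(x_feat, means, precision) < threshold
```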

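The cosine distance-based detectors are likewise not detailed in the abstract; a plausible analogue scores each trial by the cosine similarity between its feature vector and the nearest class-mean direction, flagging trials that point away from every class. This, too, is a hypothetical sketch, not the paper's method.

```python
# Hypothetical cosine distance-based detector in the same spirit.
import numpy as np

def cosine_score(x_feat, class_means):
    """Maximum cosine similarity between the trial's feature vector and
    any class-mean direction; low similarity suggests an off-manifold
    (potentially adversarial) input."""
    x = x_feat / (np.linalg.norm(x_feat) + 1e-12)
    sims = [float(x @ (mu / (np.linalg.norm(mu) + 1e-12)))
            for mu in class_means.values()]
    return max(sims)

def is_adversarial_cosine(x_feat, class_means, threshold):
    # Flag as adversarial when similarity to every class direction is low.
    return cosine_score(x_feat, class_means) < threshold
```

Either detector's scores can be summarized by the area under the ROC curve over a mixed set of clean and adversarial trials, e.g. with sklearn.metrics.roc_auc_score, which is how detection performance such as the reported 99.99% AUC is typically measured.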