A comparing network for the classification of steady-state visual evoked potential responses based on convolutional neural network

Jiczhen Xing, Shuang Qiu, Chenyao Wu, Xuelin Ma, Jinpeng Li, Huiguang He
{"title":"A comparing network for the classification of steady-state visual evoked potential responses based on convolutional neural network","authors":"Jiczhen Xing, Shuang Qiu, Chenyao Wu, Xuelin Ma, Jinpeng Li, Huiguang He","doi":"10.1109/CIVEMSA45640.2019.9071633","DOIUrl":null,"url":null,"abstract":"Brain-computer interfaces (BCIs) based on Steady-State Visual Evoked Potentials (SSVEPs) has been attracting much attention because of its high information transfer rate and little user training. However, most methods applied to decode SSVEPs are limited to CCA and some extended CCA-based methods. This study proposed a comparing network based on Convolutional Neural Network (CNN), which was used to learn the relationship between EEG signals and the templates corresponding to each stimulus frequency of SSVEPs. This novel method incorporated prior knowledge and a spatial filter (task related component analysis, TRCA) to enhance detection of SSVEPs. The effectiveness of the proposed method was validated by comparing it with the standard CCA and other state-of-the art methods for decoding SSVEPs (i.e., CNN and TRCA) on the actual SSVEP datasets collected from 17 subjects. The comparison results indicated that the CNN-based comparing network significantly could significantly improve the classification accuracy compared with the standard CCA, TRCA and CNN. Furthermore, the comparing network with TRCA achieved the best performance among three methods based on comparing network with the averaged accuracy of 84.57% (data length: 2s) and 70.21% (data length: 1s). The study validated the efficiency of the proposed CNN-based comparing methods in decoding SSVEPs. It suggests that the comparing network with TRCA is a promising methodology for target identification of SSVEPs and could further improve the performance of SSVEP-based BCI system.","PeriodicalId":293990,"journal":{"name":"2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIVEMSA45640.2019.9071633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Brain-computer interfaces (BCIs) based on Steady-State Visual Evoked Potentials (SSVEPs) have been attracting much attention because of their high information transfer rate and minimal user training. However, most methods applied to decode SSVEPs are limited to canonical correlation analysis (CCA) and extended CCA-based methods. This study proposed a comparing network based on a Convolutional Neural Network (CNN), which was used to learn the relationship between EEG signals and the templates corresponding to each stimulus frequency of SSVEPs. This novel method incorporated prior knowledge and a spatial filter (task-related component analysis, TRCA) to enhance the detection of SSVEPs. The effectiveness of the proposed method was validated by comparing it with standard CCA and other state-of-the-art methods for decoding SSVEPs (i.e., CNN and TRCA) on SSVEP datasets collected from 17 subjects. The comparison results indicated that the CNN-based comparing network could significantly improve classification accuracy compared with standard CCA, TRCA, and CNN. Furthermore, the comparing network with TRCA achieved the best performance among the three comparing-network-based methods, with average accuracies of 84.57% (data length: 2 s) and 70.21% (data length: 1 s). The study validated the efficiency of the proposed CNN-based comparing methods in decoding SSVEPs. It suggests that the comparing network with TRCA is a promising methodology for target identification of SSVEPs and could further improve the performance of SSVEP-based BCI systems.
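For context on the baseline the proposed comparing network is evaluated against: standard CCA identifies the attended stimulus by correlating the multichannel EEG with sine/cosine reference templates at each candidate frequency and picking the frequency with the largest canonical correlation. The sketch below illustrates only this CCA baseline, not the authors' comparing network or TRCA variant; the sampling rate, channel count, harmonic count, and stimulus frequencies are illustrative assumptions.

```python
# Minimal sketch of standard CCA-based SSVEP frequency detection (the baseline
# method named in the abstract), NOT the proposed CNN comparing network.
# Sampling rate, channel count, harmonics, and stimulus frequencies are assumed.
import numpy as np
from sklearn.cross_decomposition import CCA

def make_reference(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference templates for one stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)            # shape: (n_samples, 2 * n_harmonics)

def cca_correlation(eeg, ref):
    """Largest canonical correlation between an EEG trial and a reference set."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)      # canonical variates of EEG and reference
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def classify_trial(eeg, stim_freqs, fs):
    """Return the stimulus frequency whose references correlate most with the EEG."""
    n_samples = eeg.shape[0]
    rhos = [cca_correlation(eeg, make_reference(f, n_samples, fs)) for f in stim_freqs]
    return stim_freqs[int(np.argmax(rhos))]

# Example: one 2-second trial at 250 Hz with 8 channels (synthetic noise here).
fs, stim_freqs = 250, [8.0, 10.0, 12.0, 15.0]   # assumed setup, not from the paper
trial = np.random.randn(2 * fs, 8)
print("Predicted frequency:", classify_trial(trial, stim_freqs, fs))
```

The paper's contribution replaces this correlation step with a CNN that learns a similarity measure between EEG signals and per-frequency templates (optionally TRCA-filtered); the above is only the conventional reference point against which that network is compared.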