Filter banks guided correlational convolutional neural network for SSVEPs based BCI classification

Xin Wen, Shuting Jia, Dan Han, Yanqing Dong, Chengxin Gao, Ruochen Cao, Yanrong Hao, Yuxiang Guo, Rui Cao

Journal of neural engineering, published 2024-10-04. DOI: 10.1088/1741-2552/ad7f89
Citations: 0
Abstract
Objective. In steady-state visual evoked potential brain-computer interface (SSVEP-BCI) research, convolutional neural networks (CNNs) have gradually proved to be an effective method. However, most existing works train the network on frequency-domain characteristics extracted from long time windows, which leads to insufficient performance in short time windows. Furthermore, relying only on frequency-domain information for classification discards other task-related information.

Approach. To address these issues, we propose a time-frequency domain generalized filter-bank convolutional neural network (FBCNN-G) to improve SSVEP-BCI classification performance. The network integrates multi-band frequency information from the electroencephalogram (EEG) with template signals and a predefined prior of sine-cosine reference signals to perform feature extraction, including correlation analyses at both the template and signal levels. Classification is then performed at the end of the network. In addition, the method uses filter banks divided into specific frequency bands as pre-filters in the network, so that the fundamental and harmonic frequency characteristics of the signal are fully taken into account.

Main results. The proposed FBCNN-G model is compared with other methods on the public Benchmark dataset. The results show that the model achieves higher character recognition accuracy and information transfer rates across several time windows. In particular, in the 0.2 s time window, the mean accuracy of the proposed method reaches 62.02% ± 5.12%, indicating its superior performance.

Significance. The proposed FBCNN-G model is valuable for the development of SSVEP-BCI character recognition models.
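The filter-bank pre-filtering idea described in the Approach — decomposing the EEG into sub-bands that cover the stimulus fundamentals and their harmonics before any learned processing — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the band edges, sampling rate, filter order, and channel count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def make_filter_bank(bands, fs, order=4):
    """Design one Butterworth bandpass filter per (low, high) band in Hz."""
    return [butter(order, [lo, hi], btype="bandpass", fs=fs) for lo, hi in bands]

def apply_filter_bank(eeg, filters):
    """Filter a (channels, samples) EEG array with each sub-band filter.

    Returns an array of shape (n_bands, channels, samples), i.e. one
    filtered copy of the signal per sub-band, ready to feed to a CNN.
    """
    return np.stack([filtfilt(b, a, eeg, axis=-1) for b, a in filters])

fs = 250  # assumed sampling rate in Hz
# Hypothetical sub-bands: the m-th band starts near the m-th harmonic of
# the lowest stimulus frequency (8 Hz here) and shares an upper cutoff,
# so later bands emphasize progressively higher harmonics.
bands = [(8 * m, 90) for m in range(1, 4)]   # [(8, 90), (16, 90), (24, 90)]

rng = np.random.default_rng(0)
eeg = rng.standard_normal((9, fs))           # 9 channels, 1 s of synthetic EEG
sub_band_eeg = apply_filter_bank(eeg, make_filter_bank(bands, fs))
print(sub_band_eeg.shape)                    # (3, 9, 250)
```

Stacking the sub-band outputs as an extra input dimension is what lets a downstream network weight fundamental and harmonic content separately, which is the motivation the abstract gives for the pre-filter stage.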