Testing the Effectiveness of CNN and GNN and Exploring the Influence of Different Channels on Decoding Covert Speech from EEG Signals
Authors: Serena Liu, Jonathan H. Chan
Venue: The 12th International Conference on Computational Systems-Biology and Bioinformatics
Published: 2021-10-14
DOI: 10.1145/3486713.3486733
Citations: 0
Abstract
In this paper, the effectiveness of two deep learning models was tested and the significance of 62 different electroencephalogram (EEG) channels was explored on covert speech classification tasks using time-series EEG signals. Experiments were conducted on the classification between the words “in” and “cooperate” from the ASU dataset and the classification among 11 different prompts from the KaraOne dataset. The deep learning models used were the 1D convolutional neural network (CNN) and the graph neural network (GNN). Overall, the CNN model showed decent performance, with an accuracy of around 80% on the classification between “in” and “cooperate”, while the GNN appeared unsuitable for time-series data. By examining the accuracy of the CNN model trained on different EEG channels, the prefrontal and frontal regions appeared to be the most relevant to the model's performance. Although this finding differs noticeably from various previous works, it could provide insights into the cortical activities behind covert speech.
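To illustrate the kind of model the abstract describes, the sketch below shows a minimal 1D-CNN forward pass over a single multi-channel EEG trial: a temporal convolution across all 62 channels, a ReLU nonlinearity, global average pooling, and a logistic output for the binary “in” vs. “cooperate” decision. The filter count, kernel length, and 256-sample window are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """Valid 1D convolution over the time axis.
    x: (channels, time); kernels: (n_filters, channels, k)."""
    n_filters, _, k = kernels.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.empty((n_filters, t_out))
    for f in range(n_filters):
        for t in range(t_out):
            # Each filter spans all EEG channels and a short time window.
            out[f, t] = np.sum(kernels[f] * x[:, t * stride : t * stride + k])
    return out

x = rng.standard_normal((62, 256))          # one EEG trial: 62 channels x 256 samples
w = rng.standard_normal((8, 62, 7)) * 0.01  # 8 filters, kernel length 7 (illustrative)
h = np.maximum(conv1d(x, w), 0.0)           # ReLU feature maps: (8, 250)
pooled = h.mean(axis=1)                     # global average pooling over time: (8,)
logit = pooled @ rng.standard_normal(8)     # linear readout (untrained weights)
p = 1.0 / (1.0 + np.exp(-logit))            # P("cooperate") vs. "in"
```

Restricting `x` to a subset of rows (e.g. only the prefrontal/frontal channels) and comparing accuracies after training is one straightforward way to probe per-region channel significance, as the abstract's channel analysis suggests.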