Analysis of Sound Imagery in EEG with a Convolutional Neural Network and an Input-perturbation Network Prediction Technique

Sarawin Khemmachotikun, Y. Wongsawat
{"title":"Analysis of Sound Imagery in EEG with a Convolutional Neural Network and an Input-perturbation Network Prediction Technique","authors":"Sarawin Khemmachotikun, Y. Wongsawat","doi":"10.23919/SICE48898.2020.9240467","DOIUrl":null,"url":null,"abstract":"Sound imagery has been studied in past decades with techniques such as fMRI, PET, MEG, or tDCS. However, sound imagery phenomenon in EEG signal has not been widely studied. Use of deep learning in EEG applications is increasing in popularity due to the ability to learn EEG data without rich data pre-processing. In contrast to typical classification models, with the input -perturbation network prediction technique used here, we visualized the learned features from the trained model in terms of the correlation between the change in input frequency and the change in network prediction to better understand the features the model used for decision making. In this study, we recorded EEG signals from three subjects who were asked to perform a sound imagery task. In the first phase, subjects were asked to listen to and remember a generated sound; in the second phase, subjects were asked to imagine a sound of the same pitch. One-fourth of trials had no sound generation; EEG signals were labeled with the no imagery class. EEG signals from the remaining trials were labeled with the sound imagery class for model training. The best accuracy of 71.41% was obtained by the shallow model for subject 1, and an average accuracy of 61.00% was achieved between subjects. The model’s decision to classify EEG data into the sound imagery class was based on decreases in the delta, theta, and low beta bands in the frontal lobe and corresponding increases in the in the right temporal lobe of the brain.","PeriodicalId":240352,"journal":{"name":"2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/SICE48898.2020.9240467","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Sound imagery has been studied in past decades with techniques such as fMRI, PET, MEG, and tDCS. However, the sound imagery phenomenon in the EEG signal has not been widely studied. The use of deep learning in EEG applications is growing in popularity because such models can learn from EEG data without extensive pre-processing. In contrast to typical classification models, the input-perturbation network prediction technique used here let us visualize the features learned by the trained model, in terms of the correlation between changes in input frequency and changes in network prediction, to better understand the features the model used for decision making. In this study, we recorded EEG signals from three subjects who were asked to perform a sound imagery task. In the first phase, subjects were asked to listen to and remember a generated sound; in the second phase, subjects were asked to imagine a sound of the same pitch. One-fourth of trials had no sound generation, and their EEG signals were labeled with the no-imagery class; EEG signals from the remaining trials were labeled with the sound imagery class for model training. The best accuracy of 71.41% was obtained by the shallow model for subject 1, and an average accuracy of 61.00% was achieved across subjects. The model's decision to classify EEG data into the sound imagery class was based on decreases in the delta, theta, and low beta bands in the frontal lobe and corresponding increases in the right temporal lobe of the brain.
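To make the visualization method concrete, below is a minimal sketch of the input-perturbation network prediction technique (in the style of Schirrmeister et al., 2017): the amplitude of each frequency bin of the EEG input is randomly perturbed, the change in the trained network's predictions is recorded, and the two are correlated across repetitions. The function name, noise scale, iteration count, and the assumption of a PyTorch model mapping (trials, channels, samples) arrays to class logits are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of the input-perturbation network prediction technique:
# perturb per-frequency amplitudes of the EEG input, measure the change
# in the trained network's predictions, and correlate the two.
import numpy as np
import torch


def input_perturbation_correlations(model, X, n_iterations=50,
                                    noise_scale=0.02, device="cpu"):
    """Correlate random amplitude perturbations per frequency bin with
    the resulting change in network predictions.

    X: EEG trials, shape (n_trials, n_channels, n_samples).
    Returns correlations of shape (n_freq_bins, n_channels, n_classes).
    """
    model.eval()
    n_trials, n_channels, n_samples = X.shape
    spec = np.fft.rfft(X, axis=2)                  # (trials, channels, bins)
    amps, phases = np.abs(spec), np.angle(spec)

    # Baseline predictions on the unperturbed trials.
    with torch.no_grad():
        base = model(torch.as_tensor(X, dtype=torch.float32,
                                     device=device)).cpu().numpy()

    rng = np.random.default_rng(0)
    all_perturbs, all_diffs = [], []
    for _ in range(n_iterations):
        # Random multiplicative amplitude perturbation per trial/channel/bin.
        perturb = rng.normal(0.0, noise_scale, size=amps.shape)
        perturbed = amps * (1.0 + perturb) * np.exp(1j * phases)
        X_pert = np.fft.irfft(perturbed, n=n_samples, axis=2)
        with torch.no_grad():
            out = model(torch.as_tensor(X_pert, dtype=torch.float32,
                                        device=device)).cpu().numpy()
        all_perturbs.append(perturb.mean(axis=0))  # (channels, bins)
        all_diffs.append((out - base).mean(axis=0))  # (classes,)

    P = np.stack(all_perturbs)                     # (iters, channels, bins)
    D = np.stack(all_diffs)                        # (iters, classes)

    # Pearson correlation across iterations between each (channel, bin)
    # perturbation series and each class-prediction-change series.
    Pz = (P - P.mean(0)) / (P.std(0) + 1e-12)
    Dz = (D - D.mean(0)) / (D.std(0) + 1e-12)
    return np.einsum("icb,ik->bck", Pz, Dz) / len(Pz)
```

Averaging the returned correlations over the frequency bins of a band (e.g., delta, theta, low beta) and plotting the per-channel values as a scalp topography yields band-wise spatial maps of the kind the paper interprets, such as frontal decreases and right-temporal increases for the sound imagery class.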