Subject Generalization in Classifying Imagined and Spoken Speech with MEG

Debadatta Dash, P. Ferrari, A. Babajani-Feremi, David F. Harwath, A. Borna, Jun Wang
DOI: 10.1109/NER52421.2023.10123722
Published in: 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER), 2023-04-24

Abstract

Speech decoding-based brain-computer interfaces (Speech-BCIs) decode speech directly from brain signals and have the potential to offer faster and more natural communication to patients with locked-in syndrome than current BCI spellers. On account of the large cognitive variance among subjects, most current Speech-BCI models have focused on subject-dependent decoding, where the training and evaluation of the decoding algorithms use data from the same participants. These models do not generalize across individuals and are thus limited by the small amount of data that can be obtained from a single participant. Few studies have attempted subject-independent decoding, and their performance is sub-par at best and significantly lower than that of subject-dependent models. To address this issue, we evaluated imagined and overt speech decoding with magnetoencephalography (MEG) recordings of eight speakers in a generalizable subject-independent setting. We used recent domain adaptation techniques, including feature augmentation and curriculum learning, to introduce generalizability to the decoding model. Our results indicated that domain adaptation techniques can be effective for subject-independent decoding. The best performance was obtained with a curriculum-learning-based adaptation technique, whose decoding accuracy was close to that of subject-dependent decoding. Our findings show the possibility of subject generalization in neural speech decoding.
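The abstract's curriculum-learning idea can be illustrated with a minimal, self-contained sketch: score training samples by the loss of a preliminary model (a difficulty proxy), then retrain on progressively larger, harder subsets. This toy logistic-regression example is an illustrative assumption only, not the paper's actual MEG pipeline or model.

```python
# Hypothetical curriculum-learning sketch: easy-to-hard sample ordering.
# The logistic model, toy data, and staging fractions are assumptions for
# illustration; the paper's MEG decoding model is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, w=None, epochs=30, lr=0.5):
    """Plain batch gradient descent on the logistic loss."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def sample_losses(X, y, w):
    """Per-sample cross-entropy, used as a difficulty proxy."""
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy binary-classification data (linearly separable by construction).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

# Stage 1: a preliminary model scores each sample's difficulty.
w0 = train_logistic(X, y, epochs=10)
order = np.argsort(sample_losses(X, y, w0))  # easy -> hard

# Stage 2: curriculum retraining on growing, increasingly hard subsets.
w = np.zeros(X.shape[1])
for frac in (0.25, 0.5, 1.0):
    idx = order[: int(frac * len(y))]
    w = train_logistic(X[idx], y[idx], w=w, epochs=30)

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

In the subject-independent setting described above, the same staging would plausibly order whole subjects (or their samples) rather than individual toy points, so the model first fits participants whose signals resemble the target before absorbing harder ones.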