Speech mode classification from electrocorticography: transfer between electrodes and participants.

Aurélie de Borman, Benjamin Wittevrongel, Bob Van Dyck, Kato Van Rooy, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Marc M Van Hulle
{"title":"Speech mode classification from electrocorticography: transfer between electrodes and participants.","authors":"Aurélie de Borman, Benjamin Wittevrongel, Bob Van Dyck, Kato Van Rooy, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Marc M Van Hulle","doi":"10.1088/1741-2552/adf2de","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>Speech brain-computer interfaces (BCIs) aim to restore communication for individuals who have lost the ability to speak by interpreting their brain activity and decoding the intended speech. As an initial component of these decoders, speech detectors have been developed to distinguish between the intent to speak and silence. However, it is important that these detectors account for real-life scenarios in which users may engage language-related brain areas-such as during reading or listening-without any intention to speak.<i>Approach.</i>In this study, we analyze the interplay between different speech modes: speaking, listening, imagining speaking, reading and mouthing. We gathered a large dataset of 29 participants implanted with electrocorticography electrodes and developed a speech mode classifier. We also assessed how well classifiers trained on data from a specific participant transfer to other participants, both in the case of a single- and multi-electrode classifier.<i>Main results.</i>High accuracy was achieved using linear classifiers, for both single-electrode and multi-electrode configurations. Single-electrode classification reached 88.89% accuracy and multi-electrode classification 96.49% accuracy in distinguishing among three classes (speaking, listening, and silence). The best performing electrodes were located on the superior temporal gyrus and sensorimotor cortex. We found that single-electrode classifiers could be transferred across recording sites. For multi-electrode classifiers, we observed that transfer performance was higher for binary classifiers compared to multiclass classifiers, with the optimal source subject of the binary classifiers depending on the speech modes being classified.<i>Significance</i>Accurately detecting speech from brain signals is essential to prevent spurious outputs from a speech BCI and to advance its use beyond lab settings. To achieve this objective, the transfer between participants is particularly valuable as it can reduce training time, especially in cases where subject training is challenging.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8000,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/adf2de","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Objective. Speech brain-computer interfaces (BCIs) aim to restore communication for individuals who have lost the ability to speak by interpreting their brain activity and decoding the intended speech. As an initial component of these decoders, speech detectors have been developed to distinguish between the intent to speak and silence. However, it is important that these detectors account for real-life scenarios in which users may engage language-related brain areas, such as during reading or listening, without any intention to speak.

Approach. In this study, we analyze the interplay between different speech modes: speaking, listening, imagining speaking, reading, and mouthing. We gathered a large dataset of 29 participants implanted with electrocorticography electrodes and developed a speech mode classifier. We also assessed how well classifiers trained on data from a specific participant transfer to other participants, for both single-electrode and multi-electrode classifiers.

Main results. High accuracy was achieved using linear classifiers, for both single-electrode and multi-electrode configurations. Single-electrode classification reached 88.89% accuracy and multi-electrode classification 96.49% accuracy in distinguishing among three classes (speaking, listening, and silence). The best-performing electrodes were located on the superior temporal gyrus and sensorimotor cortex. We found that single-electrode classifiers could be transferred across recording sites. For multi-electrode classifiers, transfer performance was higher for binary classifiers than for multiclass classifiers, with the optimal source subject of a binary classifier depending on the speech modes being classified.

Significance. Accurately detecting speech from brain signals is essential to prevent spurious outputs from a speech BCI and to advance its use beyond lab settings. To achieve this objective, transfer between participants is particularly valuable, as it can reduce training time, especially in cases where subject training is challenging.
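
The abstract does not spell out the pipeline, so the following is only a minimal, hypothetical sketch of the kind of analysis it describes: a linear classifier (here scikit-learn's LinearDiscriminantAnalysis, one common choice; the abstract only says "linear classifiers") trained on per-electrode feature vectors (high-gamma band power is assumed; the abstract does not name the features), evaluated within one participant and then applied directly to a second participant as a naive transfer test. All data below are synthetic placeholders, and the cross-participant step ignores the electrode matching or anatomical alignment a real ECoG transfer would require.

```python
# Minimal sketch (not the authors' code): three-class speech mode
# classification (speaking, listening, silence) from ECoG-like features,
# plus a naive cross-participant transfer test.
# Assumptions: per-trial, per-electrode high-gamma band power features;
# a linear classifier (LDA); synthetic placeholder data throughout.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def make_synthetic_participant(n_trials=120, n_electrodes=64, seed=0):
    """Placeholder for one participant's trial-wise feature matrix and labels."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 3, size=n_trials)   # 0 = speaking, 1 = listening, 2 = silence
    X = rng.normal(size=(n_trials, n_electrodes))
    X += y[:, None] * 0.5                   # inject class-dependent structure
    return X, y


# Within-participant evaluation (multi-electrode, multiclass).
X_a, y_a = make_synthetic_participant(seed=1)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X_a, y_a, cv=5)
print(f"within-participant accuracy: {scores.mean():.2%}")

# Cross-participant transfer: train on participant A, test directly on B.
X_b, y_b = make_synthetic_participant(seed=2)
clf.fit(X_a, y_a)
print(f"transfer accuracy (A -> B): {clf.score(X_b, y_b):.2%}")
```

A binary version of the transfer test (e.g. speaking vs. silence) would simply restrict the labels to two classes before fitting; per the abstract, such binary classifiers transferred better across participants than the full multiclass setup.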
