Decoding spectrotemporal features of overt and covert speech from the human cortex.

Frontiers in Neuroengineering · Pub Date: 2014-05-27 · eCollection Date: 2014-01-01 · DOI: 10.3389/fneng.2014.00014
Stéphanie Martin, Peter Brunner, Chris Holdgraf, Hans-Jochen Heinze, Nathan E Crone, Jochem Rieger, Gerwin Schalk, Robert T Knight, Brian N Pasley
Citations: 170

Abstract



Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used electrocorticography intracranial recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition, the subject remained in a resting state. We first built a high gamma (70-150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and postcentral gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
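The evaluation pipeline the abstract describes — fit a decoding model on overt speech, reconstruct spectral features, realign covert reconstructions with dynamic time warping, then score by feature-wise correlation — can be sketched as below. This is a minimal illustrative sketch, not the authors' code: the ridge-regression decoder, the naive DTW implementation, and all function names are assumptions standing in for the paper's actual methods.

```python
import numpy as np

def fit_linear_decoder(hg, spec, alpha=1.0):
    """Ridge regression mapping high-gamma features (T x E) to a spectrogram (T x F)."""
    X = np.hstack([hg, np.ones((hg.shape[0], 1))])  # append a bias column
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ spec)
    return W

def predict(hg, W):
    """Reconstruct spectral features from held-out high-gamma activity."""
    X = np.hstack([hg, np.ones((hg.shape[0], 1))])
    return X @ W

def reconstruction_accuracy(orig, recon):
    """Mean Pearson correlation between original and reconstructed features."""
    rs = [np.corrcoef(orig[:, f], recon[:, f])[0, 1] for f in range(orig.shape[1])]
    return float(np.mean(rs))

def dtw_align(recon, ref):
    """Realign a reconstruction to a reference with classic DTW (Euclidean frame cost)."""
    n, m = len(ref), len(recon)
    d = np.linalg.norm(ref[:, None, :] - recon[None, :, :], axis=2)  # (n, m) cost matrix
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = d[i - 1, j - 1] + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    # Average the reconstruction frames that map to each reference frame.
    aligned = np.zeros_like(ref)
    counts = np.zeros(n)
    for ri, ci in path:
        aligned[ri] += recon[ci]
        counts[ri] += 1
    return aligned / counts[:, None]
```

In this sketch, the overt condition would score `reconstruction_accuracy(spec, predict(hg, W))` directly, while the covert condition would first pass the reconstruction through `dtw_align` against the overt reference before correlating, mirroring the realignment step described above.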
