Brain Dynamics of Speech Modes Encoding: Loud and Whispered Speech Versus Standard Speech.

Impact Factor 2.3 · CAS Region 3 (Medicine) · JCR Q3 Clinical Neurology
Bryan Sanders, Monica Lancheros, Marion Bourqui, Marina Laganaro
Brain Topography, vol. 38, no. 2, p. 31. Published 2025-02-15.
DOI: 10.1007/s10548-025-01108-z
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11829918/pdf/
Citations: 0

Abstract

Loud speech and whispered speech are two distinct speech modes that are part of daily verbal exchanges but involve different use of the speech apparatus. However, a clear account of whether and when the motor-speech (or phonetic) encoding of these speech modes differs from that of standard speech has not yet been provided. Here, we addressed this question using electroencephalography (EEG)/event-related potential (ERP) approaches during a delayed production task, contrasting the production of speech sequences (pseudowords) when speaking normally versus under a specific speech mode: loud speech in Experiment 1 and whispered speech in Experiment 2. Behavioral results demonstrated that non-standard speech modes entail a behavioral encoding cost in terms of production latency. ERPs for standard speech and for both speech modes were characterized by the same sequence of microstate maps, suggesting that the same brain processes are involved in producing speech under a specific speech mode. Only loud speech entailed electrophysiological modulations relative to standard speech, not only in waveform amplitudes but also in the temporal distribution and strength of neural recruitment of the same sequence of microstates, during a large time window (from approximately -220 ms to -100 ms) preceding vocal onset. In contrast, the electrophysiological activity of whispered speech was similar in nature to that of standard speech. On the whole, speech modes and standard speech seem to be encoded through the same brain processes, but the degree of adjustment required seems to vary across speech modes.

Journal

Brain Topography (Medicine – Clinical Neurology)
CiteScore: 4.70
Self-citation rate: 7.40%
Annual articles: 41
Review time: 3 months
期刊介绍: Brain Topography publishes clinical and basic research on cognitive neuroscience and functional neurophysiology using the full range of imaging techniques including EEG, MEG, fMRI, TMS, diffusion imaging, spectroscopy, intracranial recordings, lesion studies, and related methods. Submissions combining multiple techniques are particularly encouraged, as well as reports of new and innovative methodologies.