Neural bases of proactive and predictive processing of meaningful sub-word units in speech comprehension.

Impact Factor: 4.4 | CAS Region 2 (Medicine) | JCR Q1 (Neurosciences)
Suhail Matar, Alec Marantz
{"title":"Neural bases of proactive and predictive processing of meaningful sub-word units in speech comprehension.","authors":"Suhail Matar, Alec Marantz","doi":"10.1523/JNEUROSCI.0781-24.2024","DOIUrl":null,"url":null,"abstract":"<p><p>To comprehend speech, human brains identify meaningful units in the speech stream. But whereas the English '<i>She believed him.</i>' has 3 word-units, the Arabic equivalent '<i>ṣaddaqathu.</i>' is a single word-unit with 3 meaningful sub-word units, called morphemes: a verb stem ('<i>ṣaddaqa</i>'), a subject suffix ('-<i>t</i>-'), and a direct object pronoun ('-<i>hu</i>'). It remains unclear whether and how the brain processes morphemes, above and beyond other language units, during speech comprehension. Here, we propose and test hierarchically-nested encoding models of speech comprehension: a naïve model with word-, syllable-, and sound-level information; a bottom-up model with additional morpheme boundary information; and predictive models that process morphemes before these boundaries. We recorded magnetoencephalography (MEG) data as 27 participants (16 female) listened to Arabic sentences like '<i>ṣaddaqathu.</i>'. A temporal response function (TRF) analysis revealed that in temporal and left inferior frontal regions predictive models outperform the bottom-up model, which outperforms the naïve model. Moreover, verb stems were either length-ambiguous (e.g., '<i>ṣaddaqa</i>' could initially be mistaken for the shorter stem '<i>ṣadda</i>'='<i>blocked</i>') or length-unambiguous (e.g., '<i>qayyama</i>'='<i>evaluated</i>' cannot be mistaken for a shorter stem), but shared a uniqueness point, beyond which stem identity is fully disambiguated. Evoked analyses revealed differences between conditions before the uniqueness point, suggesting that, rather than await disambiguation, the brain employs proactive predictive strategies, processing accumulated input as soon as any possible stem is identifiable, even if not uniquely. These findings highlight the role of morphemes in speech, and the importance of including morpheme-level information in neural and computational models of speech comprehension.<b>Significance statement</b> Many leading models of speech comprehension include information about words, syllables and sounds. But languages vary considerably in the amount of meaning packed into word units. This work proposes speech comprehension models with information about meaningful sub-word units, called morphemes (e.g., '<i>bake-</i>' and '<i>-ing</i>' in '<i>baking</i>'), and shows that they explain significantly more neural activity than models without morpheme information. We also show how the brain predictively processes morphemic information. These findings highlight the role of morphemes in speech comprehension and emphasize the contributions of morpheme-level information-theoretic metrics, like surprisal and entropy. 
Our findings can be used to update current neural, cognitive, and computational models of speech comprehension, and constitute a step towards refining those models for naturalistic, connected speech.</p>","PeriodicalId":50114,"journal":{"name":"Journal of Neuroscience","volume":" ","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1523/JNEUROSCI.0781-24.2024","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

To comprehend speech, human brains identify meaningful units in the speech stream. But whereas the English 'She believed him.' has three word units, the Arabic equivalent 'ṣaddaqathu.' is a single word unit with three meaningful sub-word units, called morphemes: a verb stem ('ṣaddaqa'), a subject suffix ('-t-'), and a direct object pronoun ('-hu'). It remains unclear whether and how the brain processes morphemes, above and beyond other language units, during speech comprehension. Here, we propose and test hierarchically nested encoding models of speech comprehension: a naïve model with word-, syllable-, and sound-level information; a bottom-up model with additional morpheme boundary information; and predictive models that process morphemes before these boundaries. We recorded magnetoencephalography (MEG) data as 27 participants (16 female) listened to Arabic sentences like 'ṣaddaqathu.'. A temporal response function (TRF) analysis revealed that, in temporal and left inferior frontal regions, predictive models outperformed the bottom-up model, which in turn outperformed the naïve model. Moreover, verb stems were either length-ambiguous (e.g., 'ṣaddaqa' could initially be mistaken for the shorter stem 'ṣadda' = 'blocked') or length-unambiguous (e.g., 'qayyama' = 'evaluated' cannot be mistaken for a shorter stem), but shared a uniqueness point, beyond which stem identity is fully disambiguated. Evoked analyses revealed differences between conditions before the uniqueness point, suggesting that, rather than await disambiguation, the brain employs proactive predictive strategies, processing accumulated input as soon as any possible stem is identifiable, even if not uniquely. These findings highlight the role of morphemes in speech and the importance of including morpheme-level information in neural and computational models of speech comprehension.

Significance Statement

Many leading models of speech comprehension include information about words, syllables, and sounds. But languages vary considerably in the amount of meaning packed into word units. This work proposes speech comprehension models with information about meaningful sub-word units, called morphemes (e.g., 'bake-' and '-ing' in 'baking'), and shows that they explain significantly more neural activity than models without morpheme information. We also show how the brain predictively processes morphemic information. These findings highlight the role of morphemes in speech comprehension and emphasize the contributions of morpheme-level information-theoretic metrics, such as surprisal and entropy. Our findings can be used to update current neural, cognitive, and computational models of speech comprehension, and constitute a step towards refining those models for naturalistic, connected speech.
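To make the information-theoretic notions concrete, the sketch below walks a toy cohort model through a stem segment by segment. The lexicon, its frequency counts, and the simplified (diacritic-free) transliterations are all hypothetical, invented for illustration and not drawn from the paper's materials; it simply shows how cohort entropy collapses to zero at a stem's uniqueness point, and how a length-ambiguous stem like 'ṣaddaqa' keeps the shorter competitor 'ṣadda' alive for longer than an unambiguous stem would.

```python
import math

# Hypothetical toy lexicon of verb stems with relative frequencies.
# Transliterations are simplified (no diacritics); entries and counts
# are invented for illustration, not taken from the paper's corpus.
LEXICON = {
    "saddaqa": 40,  # 'believed' (length-ambiguous: 'sadda' is a live competitor)
    "sadda":   25,  # 'blocked'
    "qayyama": 10,  # 'evaluated' (length-unambiguous)
    "qarara":  25,  # a filler competitor beginning with 'q'
}

def cohort(prefix):
    """Return the stems still compatible with the input heard so far."""
    return {w: f for w, f in LEXICON.items() if w.startswith(prefix)}

def entropy(freqs):
    """Shannon entropy (bits) of the normalized cohort distribution."""
    total = sum(freqs.values())
    return -sum((f / total) * math.log2(f / total) for f in freqs.values())

def surprisal(prev, cur):
    """Surprisal (bits) of the latest segment: -log2 P(cur cohort | prev cohort)."""
    return -math.log2(sum(cur.values()) / sum(prev.values()))

word = "saddaqa"
for i in range(1, len(word) + 1):
    prev, cur = cohort(word[:i - 1]), cohort(word[:i])
    print(f"after '{word[:i]}': surprisal={surprisal(prev, cur):.2f} bits, "
          f"entropy={entropy(cur):.2f} bits, cohort={sorted(cur)}")
```

In this toy run, entropy stays positive through 'sadda', where the shorter stem is complete but 'saddaqa' remains viable, and drops to zero at 'saddaq', the point at which the stem is uniquely identified.

The model comparison can be sketched in the same spirit. The following is a minimal encoding-model comparison on simulated data, not the authors' MEG pipeline: it fits ridge-regression TRFs for a "naïve" acoustic-only predictor set and for an augmented set that adds a morpheme-boundary impulse train, then compares held-out prediction accuracy. All signals, sizes, and the regularization value are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, LAGS = 5000, 20  # number of time samples; length of the TRF lag window

# Simulated predictors: a continuous acoustic envelope and a sparse
# morpheme-boundary impulse train. Both are synthetic stand-ins.
envelope = rng.random(T)
boundaries = (rng.random(T) < 0.02).astype(float)

def lagged(x, n_lags):
    """Stack time-lagged copies of a predictor into a design matrix."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

# Simulated sensor signal: responds to both predictors, plus noise.
y = (lagged(envelope, LAGS) @ rng.standard_normal(LAGS)
     + 2.0 * (lagged(boundaries, LAGS) @ rng.standard_normal(LAGS))
     + rng.standard_normal(T))

def fit_eval(features, y, lam=10.0, split=4000):
    """Fit a ridge-regression TRF and return Pearson r on held-out data."""
    X = np.hstack([lagged(f, LAGS) for f in features])
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

r_naive = fit_eval([envelope], y)              # acoustic information only
r_morph = fit_eval([envelope, boundaries], y)  # plus morpheme boundaries
print(f"naive model r = {r_naive:.3f}; + morpheme boundaries r = {r_morph:.3f}")
```

Because the simulated signal genuinely contains boundary-locked responses, the augmented model predicts held-out data better, mirroring the logic (though not the methods) of the nested-model comparison described in the abstract.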

Source journal
Journal of Neuroscience (Medicine - Neuroscience)
CiteScore: 9.30
Self-citation rate: 3.80%
Articles per year: 1164
Review time: 12 months
About the journal: JNeurosci (ISSN 0270-6474) is an official journal of the Society for Neuroscience. It is published weekly by the Society, fifty weeks a year, in a single annual volume. JNeurosci publishes papers on a broad range of topics of general interest to those working on the nervous system. Authors now have an Open Choice option for their published articles.