Audiovisual integration of speech: evidence for increased accuracy in "talk" versus "listen" condition.

Lefteris Themelis Zografos, Anna Konstantoulaki, Christoph Klein, Argiro Vatakis, Nikolaos Smyrnis
Experimental Brain Research, 243(6), 154. Published 2025-05-26. DOI: 10.1007/s00221-025-07088-7
Impact factor: 1.7 · JCR Q4, Neurosciences · CAS Tier 4, Medicine
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106506/pdf/
Citations: 0

Abstract

Processing of sensory stimuli generated by our own actions differs from that of externally generated stimuli. However, most evidence for this phenomenon concerns the processing of unisensory stimuli. A few studies have explored how self-generated actions affect multisensory stimuli and the integration of those stimuli, but most used abstract stimuli (e.g., flashes, beeps) rather than more natural ones, such as the sensations that commonly accompany everyday actions like speech. In the current study, we explored the effect of self-generated action on the process of multisensory integration (MSI) during speech. We used a novel paradigm in which participants either listened to the echo of their own speech while watching a video of themselves producing the same speech ("talk", active condition), or listened to their previously recorded speech while watching the prerecorded video of themselves producing the same speech ("listen", passive condition). In both conditions, different stimulus onset asynchronies (SOAs) were introduced between the auditory and visual streams, and participants were asked to perform simultaneity judgments. Using these judgments, we determined a temporal binding window (TBW) of integration for each participant and condition. The TBW was significantly smaller in the active than in the passive condition, indicating more accurate MSI. These results support the conclusion that sensory perception is modulated by self-generated action at the multisensory as well as the unisensory level.
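To make the TBW measure concrete: at each SOA, participants judge whether the auditory and visual streams appeared simultaneous, and the TBW is the range of SOAs over which they tend to report simultaneity, so a narrower window implies more temporally precise integration. Below is a minimal sketch of one common way to estimate a TBW from such data, by fitting a Gaussian to the proportion of "simultaneous" responses across SOAs. The abstract does not specify the authors' fitting procedure; the Gaussian model, the example SOAs and response rates, and the full-width-at-half-maximum criterion are all illustrative assumptions.

```python
# Sketch: estimating a temporal binding window (TBW) from simultaneity-
# judgment data. The model, data, and TBW criterion are assumptions for
# illustration, not the paper's actual analysis pipeline.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, mu, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amplitude * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical data: negative SOA = audio leads, positive SOA = video leads.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms
p_simultaneous = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.10])

# Fit the Gaussian; initial guesses: full amplitude, peak near 0 ms, 150 ms spread.
params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 150.0])
amplitude, mu, sigma = params

# One conventional TBW definition: full width at half maximum of the fitted
# curve (abs() guards against a sign-flipped sigma, which fits identically).
tbw = 2 * abs(sigma) * np.sqrt(2 * np.log(2))
print(f"Point of subjective simultaneity: {mu:.1f} ms, TBW (FWHM): {tbw:.1f} ms")
```

In the study's terms, a significantly smaller fitted TBW in the "talk" than in the "listen" condition would correspond to the reported improvement in integration accuracy.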

Source journal: Experimental Brain Research
CiteScore: 3.60 · Self-citation rate: 5.00% · Annual publications: 228 · Review time: 1 month
Journal description: Founded in 1966, Experimental Brain Research publishes original contributions on many aspects of experimental research on the central and peripheral nervous system. The focus is on molecular physiology, behavior, neurochemistry, developmental, cellular, and molecular neurobiology, and experimental pathology relevant to general problems of cerebral function. The journal publishes original papers, reviews, and mini-reviews.