Audiovisual integration of speech: evidence for increased accuracy in "talk" versus "listen" condition

Lefteris Themelis Zografos, Anna Konstantoulaki, Christoph Klein, Argiro Vatakis, Nikolaos Smyrnis

Experimental Brain Research, 243(6), 154 (published 2025-05-26)
DOI: 10.1007/s00221-025-07088-7
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106506/pdf/
Citations: 0
Abstract
Processing of sensory stimuli generated by our own actions differs from that of externally generated stimuli. However, most evidence for this phenomenon concerns the processing of unisensory stimuli. A few studies have explored how self-generated actions affect multisensory stimuli and the integration of those stimuli, but most used abstract stimuli (e.g., flashes, beeps) rather than more natural ones, such as the sensations that accompany everyday actions like speech. In the current study, we explored the effect of self-generated action on the process of multisensory integration (MSI) during speech. We used a novel paradigm in which participants either listened to the echo of their own speech while watching a video of themselves producing the same speech ("talk", active condition), or listened to their previously recorded speech while watching the prerecorded video of themselves producing the same speech ("listen", passive condition). In both conditions, different stimulus onset asynchronies (SOAs) were introduced between the auditory and visual streams, and participants were asked to perform simultaneity judgments. Using these judgments, we determined a temporal binding window (TBW) of integration for each participant and condition. We found that the TBW was significantly smaller in the active than in the passive condition, indicating more accurate MSI. These results support the conclusion that sensory perception is modulated by self-generated action at the multisensory, in addition to the unisensory, level.
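The abstract does not specify the exact fitting procedure used to derive the temporal binding window, but a common approach in simultaneity-judgment studies is to fit a Gaussian to the proportion of "simultaneous" responses across SOAs and take the window's width from the fitted curve. The sketch below illustrates that general technique on made-up data; the SOA values, response proportions, and the full-width-at-half-maximum criterion are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical simultaneity-judgment data for one participant/condition.
# SOA in ms (negative = auditory stream leads the visual stream) and the
# proportion of trials judged "simultaneous" at each SOA.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simultaneous = np.array([0.05, 0.15, 0.55, 0.85, 0.95, 0.90, 0.60, 0.20, 0.05])

def gaussian(soa, amplitude, mu, sigma):
    """Gaussian model of the simultaneity-judgment curve."""
    return amplitude * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Fit the curve; p0 gives rough starting guesses for amplitude, center, width.
params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 150.0])
amplitude, mu, sigma = params

# One common convention: take the TBW as the full width at half maximum
# (FWHM) of the fitted curve. A narrower window indicates stricter
# audiovisual binding.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(f"Curve peak at {mu:.1f} ms, TBW (FWHM) = {fwhm:.1f} ms")
```

Comparing the fitted widths between the active and passive conditions (per participant) is the kind of contrast the study reports: a smaller window in the "talk" condition would indicate more precise integration.
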
About the journal
Founded in 1966, Experimental Brain Research publishes original contributions on many aspects of experimental research on the central and peripheral nervous system. The focus is on molecular biology, physiology, behavior, neurochemistry, developmental, cellular, and molecular neurobiology, and experimental pathology relevant to general problems of cerebral function. The journal publishes original papers, reviews, and mini-reviews.