An invisible speaker can facilitate auditory speech perception

M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki
Journal: Seeing and Perceiving, Vol. 25, No. 1, pp. 148–148
DOI: 10.1163/187847612X647801
Publication date: 2012-01-01 (Journal Article)
Citations: 0

Abstract

Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined, as quickly and accurately as possible, whether or not each word named a tool. While participants listened to the words, they watched a visual display presenting either a video clip of the speaker synchronously articulating the presented words or the same speaker articulating different words. Critically, the speaker’s face was either visible (aware trials) or suppressed from awareness using continuous flash suppression (suppressed trials). Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.
Source journal: Seeing and Perceiving (Biophysics, Psychology)
Self-citation rate: 0.00% · Review time: >12 weeks