Audiovisual Perception of Lexical Stress: Beat Gestures and Articulatory Cues.

IF 1.1 · CAS Quartile 2 (Literature) · JCR Q3, AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY
Ronny Bujok, Antje S Meyer, Hans Rutger Bosker
{"title":"词汇重音的视听感知:节拍手势和发音线索。","authors":"Ronny Bujok, Antje S Meyer, Hans Rutger Bosker","doi":"10.1177/00238309241258162","DOIUrl":null,"url":null,"abstract":"<p><p>Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating <i>VOORnaam</i> or <i>voorNAAM</i>). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"238309241258162"},"PeriodicalIF":1.1000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Audiovisual Perception of Lexical Stress: Beat Gestures and Articulatory Cues.\",\"authors\":\"Ronny Bujok, Antje S Meyer, Hans Rutger Bosker\",\"doi\":\"10.1177/00238309241258162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating <i>VOORnaam</i> or <i>voorNAAM</i>). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. 
These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.</p>\",\"PeriodicalId\":51255,\"journal\":{\"name\":\"Language and Speech\",\"volume\":\" \",\"pages\":\"238309241258162\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2024-06-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language and Speech\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1177/00238309241258162\",\"RegionNum\":2,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language and Speech","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/00238309241258162","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
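The abstract describes a fully crossed design: an auditory stress continuum is paired with a face articulating stress on either syllable and a body producing a beat gesture on either syllable. As a purely illustrative sketch (not taken from the paper; the number of continuum steps and all variable names below are assumptions), the crossed condition set could be enumerated in Python like this:

from itertools import product

# Illustrative sketch only: factor levels are assumptions, not the paper's actual stimulus counts.
auditory_steps = range(1, 8)                             # assumed 7-step VOORnaam-voorNAAM stress continuum
facial_stress = ["first_syllable", "second_syllable"]    # face articulating stress on syllable 1 or 2
beat_gesture = ["first_syllable", "second_syllable"]     # beat gesture timed to syllable 1 or 2

# Fully crossing the three factors yields every audiovisual combination exactly once.
conditions = [
    {"auditory_step": step, "facial_stress": face, "beat_gesture": beat}
    for step, face, beat in product(auditory_steps, facial_stress, beat_gesture)
]

print(len(conditions))  # 7 x 2 x 2 = 28 crossed conditions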

Source Journal
Language and Speech (AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY)
CiteScore: 4.00
Self-citation rate: 5.60%
Articles per year: 39
Review time: >12 weeks
About the journal: Language and Speech is a peer-reviewed journal which provides an international forum for communication among researchers in the disciplines that contribute to our understanding of the production, perception, processing, learning, use, and disorders of speech and language. The journal accepts reports of original research in all these areas.