Seeing the initial articulatory gestures of a word triggers lexical access

Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli
{"title":"看到一个单词最初的发音姿势会触发词汇访问","authors":"Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli","doi":"10.1080/01690965.2012.701758","DOIUrl":null,"url":null,"abstract":"When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.","PeriodicalId":87410,"journal":{"name":"Language and cognitive processes","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01690965.2012.701758","citationCount":"36","resultStr":"{\"title\":\"Seeing the initial articulatory gestures of a word triggers lexical access\",\"authors\":\"Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli\",\"doi\":\"10.1080/01690965.2012.701758\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. 
This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.\",\"PeriodicalId\":87410,\"journal\":{\"name\":\"Language and cognitive processes\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/01690965.2012.701758\",\"citationCount\":\"36\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language and cognitive processes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/01690965.2012.701758\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language and cognitive processes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/01690965.2012.701758","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 36

Abstract

When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.