{"title":"Sources of variance in the audiovisual perception of speech in noise","authors":"C. Nahanni, J. Deonarine, M. Paré, K. Munhall","doi":"10.1163/187847612X647568","DOIUrl":null,"url":null,"abstract":"The sight of a talker’s face dramatically influences the perception of auditory speech. This effect is most commonly observed when subjects are presented audiovisual (AV) stimuli in the presence of acoustic noise. However, the magnitude of the gain in perception that vision adds varies considerably in published work. Here we report data from an ongoing study of individual differences in AV speech perception when English words are presented in an acoustically noisy background. A large set of monosyllablic nouns was presented at 7 signal-to-noise ratios (pink noise) in both AV and auditory-only (AO) presentation modes. The stimuli were divided into 14 blocks of 25 words and each block was equated for spoken frequency using the SUBTLEXus database (Brysbaert and New, 2009). The presentation of the stimulus blocks was counterbalanced across subjects for noise level and presentation. In agreement with Sumby and Pollack (1954), the accuracy of both AO and AV increase monotonically with signal strength with the greatest visual gain being when the auditory signal was weakest. These average results mask considerable variability due to subject (individual differences in auditory and visual perception), stimulus (lexical type, token articulation) and presentation (signal and noise attributes) factors. We will discuss how these sources of variance impede comparisons between studies.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"123-123"},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647568","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seeing and Perceiving","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/187847612X647568","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The sight of a talker’s face dramatically influences the perception of auditory speech. This effect is most commonly observed when subjects are presented with audiovisual (AV) stimuli in the presence of acoustic noise. However, the magnitude of the perceptual gain that vision adds varies considerably across published work. Here we report data from an ongoing study of individual differences in AV speech perception when English words are presented in an acoustically noisy background. A large set of monosyllabic nouns was presented at 7 signal-to-noise ratios (pink noise) in both AV and auditory-only (AO) presentation modes. The stimuli were divided into 14 blocks of 25 words, and each block was equated for spoken word frequency using the SUBTLEXus database (Brysbaert and New, 2009). The presentation of the stimulus blocks was counterbalanced across subjects for noise level and presentation mode. In agreement with Sumby and Pollack (1954), accuracy in both the AO and AV conditions increased monotonically with signal strength, with the greatest visual gain occurring when the auditory signal was weakest. These average results mask considerable variability due to subject (individual differences in auditory and visual perception), stimulus (lexical type, token articulation), and presentation (signal and noise attributes) factors. We will discuss how these sources of variance impede comparisons between studies.
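The abstract does not give the analysis details, but the visual-gain result it describes is commonly quantified as the difference between AV and AO accuracy at each signal-to-noise ratio, sometimes normalized by the room left for improvement in the AO condition (in the spirit of Sumby and Pollack, 1954). The sketch below illustrates that computation only; the SNR values, accuracy scores, and variable names are hypothetical placeholders, not data or code from this study.

# Illustrative sketch (not the authors' analysis code): computing visual gain
# as the difference between audiovisual (AV) and auditory-only (AO) word
# recognition accuracy at each signal-to-noise ratio. All numbers below are
# hypothetical placeholders chosen to mimic the Sumby & Pollack (1954) pattern.

import numpy as np

# Hypothetical SNRs (dB) for the 7 pink-noise levels; the actual values are
# not given in the abstract.
snr_db = np.array([-21, -18, -15, -12, -9, -6, -3])

# Hypothetical proportion-correct scores per SNR (illustrative only).
ao_accuracy = np.array([0.05, 0.12, 0.25, 0.42, 0.60, 0.75, 0.88])
av_accuracy = np.array([0.35, 0.45, 0.58, 0.70, 0.80, 0.88, 0.94])

# Raw visual gain: AV minus AO accuracy at each SNR.
raw_gain = av_accuracy - ao_accuracy

# Normalized gain: improvement relative to the room left for improvement
# in the AO condition.
normalized_gain = raw_gain / (1.0 - ao_accuracy)

for snr, ao, av, g, ng in zip(snr_db, ao_accuracy, av_accuracy,
                              raw_gain, normalized_gain):
    print(f"SNR {snr:+d} dB: AO={ao:.2f}  AV={av:.2f}  "
          f"gain={g:.2f}  normalized={ng:.2f}")

With placeholder values of this shape, the raw gain is largest at the lowest SNRs, which is the pattern the abstract reports; either metric, however, will rank subjects differently, which is one reason between-study comparisons of "visual benefit" are difficult.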