Visual attention during L1 and L2 sounds perception: an eye-tracking study

Bianca Sisinni, Mirko Grimaldi, Elisa Tundo, A. Calabrese

ISCA Tutorial and Research Workshop on Experimental Linguistics. DOI: 10.36505/exling-2010/03/0043/000163
Abstract
Visual information affects speech perception, as demonstrated by the McGurk effect (McGurk & MacDonald, 1976): when an audio /ba/ is dubbed onto a visual /ga/, what is perceived is /da/. This study aims to observe how visual information, understood as articulatory orofacial movements, is processed by the eye, i.e., whether gaze is related to the processing of articulatory information. The results indicate that visual attentional resources seem to be higher during multisensory (AV) than during unisensory (A; V) presentation. Greater visual attentional resources are probably needed to integrate inputs coming from different sources. Moreover, audiovisual speech perception seems to be similar across languages (e.g., Chen & Massaro, 2004) rather than language-specific (Ghazanfar et al., 2005).
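To make the reported AV-versus-unisensory contrast concrete, the sketch below shows one common way such eye-tracking comparisons are run: total fixation duration on a mouth area of interest, compared across presentation conditions with a t-test. Everything here is hypothetical, including the numbers and the choice of measure; it does not reproduce the study's actual materials, analysis, or results.

```python
# Hypothetical sketch: comparing fixation time on the speaker's mouth
# area of interest (AOI) between audiovisual (AV) and visual-only (V)
# presentation. The durations below are invented for illustration.
from scipy import stats

# Total fixation duration (ms) on the mouth AOI, one value per trial
fixations = {
    "AV": [412, 455, 398, 430, 471, 440, 425, 460],
    "V":  [350, 362, 341, 380, 355, 370, 348, 365],
}

# Descriptive statistics per condition
for condition, durations in fixations.items():
    mean_ms = sum(durations) / len(durations)
    print(f"{condition}: mean fixation on mouth AOI = {mean_ms:.1f} ms")

# Independent-samples t-test contrasting the two conditions
t, p = stats.ttest_ind(fixations["AV"], fixations["V"])
print(f"t = {t:.2f}, p = {p:.4f}")
```

A real analysis of this kind would typically work from per-participant averages and control for trial order and stimulus identity; the toy script only illustrates the shape of the comparison the abstract reports.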