Comparing pre-linguistic normalization models against US English listeners’ vowel perception

Anna Persson, Florian Jaeger
Proceedings of International Conference of Experimental Linguistics, 1 November 2022
DOI: 10.36505/exling-2022/13/0037/000579
One of the central computational challenges for speech perception is that talkers differ in pronunciation, i.e., in how they map linguistic categories and meanings onto the acoustic signal. Yet listeners typically overcome these difficulties within minutes (Clarke & Garrett, 2004; Xie et al., 2018). The mechanisms that underlie these adaptive abilities remain unclear. One influential hypothesis holds that listeners achieve robust speech perception across talkers through low-level pre-linguistic normalization. We investigate the role of normalization in the perception of L1-US English vowels. We train ideal observers (IOs) on unnormalized or normalized acoustic cues using a phonetic database of 8 /h-VOWEL-d/ words of US English (N = 1,240 recordings from 16 talkers; Xie & Jaeger, 2020). All IOs had zero degrees of freedom in predicting perception, i.e., their predictions are completely determined by pronunciation statistics. We compare the IOs’ predictions against L1-US English listeners’ 8-way categorization responses for /h-VOWEL-d/ words in a web-based experiment. We find that (1) pre-linguistic normalization substantially improves the fit to human responses, from 74% to 90% of best-possible performance (chance = 12.5%); (2) the best-performing normalization accounts centered and/or scaled formants by talker; and (3) general-purpose normalization (C-CuRE; McMurray & Jongman, 2011) performed as well as vowel-specific normalization.
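The abstract credits the best-fitting accounts to centering and/or scaling formants by talker. A minimal sketch of that family of transforms (not the paper's implementation; the function name and defaults are illustrative, and centering plus scaling corresponds to Lobanov-style z-scoring) might look like:

```python
import numpy as np

def normalize_by_talker(formants, talker_ids, center=True, scale=True):
    """Center and/or scale formant values (e.g., F1/F2) within each
    talker. With center=scale=True this is Lobanov-style z-scoring;
    center-only and scale-only variants are the other accounts in
    the centered/scaled-by-talker family."""
    formants = np.asarray(formants, dtype=float)
    talker_ids = np.asarray(talker_ids)
    out = formants.copy()
    for t in np.unique(talker_ids):
        mask = talker_ids == t
        x = formants[mask]
        mu = x.mean(axis=0) if center else 0.0
        sd = x.std(axis=0) if scale else 1.0
        out[mask] = (x - mu) / sd
    return out
```

After this transform, each talker's formant distribution has mean 0 (and, if scaled, unit variance) in each cue dimension, removing between-talker differences in vocal-tract size and range before any linguistic categorization.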
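An ideal observer with zero degrees of freedom can be sketched as category statistics estimated from the production recordings plus Bayes' rule, with nothing fit to listener responses. The diagonal-Gaussian likelihood and flat category prior below are simplifying assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def fit_ideal_observer(cues, labels):
    """Estimate per-category cue means and (diagonal) variances
    directly from pronunciation statistics. No parameter is fit
    to perception data, so predictions are fully determined by
    the production statistics."""
    cues = np.asarray(cues, dtype=float)
    labels = np.asarray(labels)
    return {c: (cues[labels == c].mean(axis=0),
                cues[labels == c].var(axis=0))
            for c in sorted(set(labels))}

def categorize(cue, params):
    """Posterior probability of each category for one cue vector,
    assuming a flat prior over categories."""
    logp = {c: -0.5 * np.sum(np.log(2 * np.pi * v) + (cue - m) ** 2 / v)
            for c, (m, v) in params.items()}
    mx = max(logp.values())                      # for numerical stability
    unnorm = {c: np.exp(l - mx) for c, l in logp.items()}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}
```

Fitting such an observer to normalized versus unnormalized cues, and comparing its posteriors to listeners' 8-way categorization responses, is the kind of comparison the abstract describes.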