{"title":"早期的听觉感觉加工是由视觉机制促进的","authors":"Sonja Schall, S. Kiebel, B. Maess, K. Kriegstein","doi":"10.1163/187847612X648143","DOIUrl":null,"url":null,"abstract":"There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific, sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current text book knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants ( n = 19 ) briefly learned a set of voices audio–visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. 
These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"184-185"},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648143","citationCount":"0","resultStr":"{\"title\":\"Early auditory sensory processing is facilitated by visual mechanisms\",\"authors\":\"Sonja Schall, S. Kiebel, B. Maess, K. Kriegstein\",\"doi\":\"10.1163/187847612X648143\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific, sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current text book knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants ( n = 19 ) briefly learned a set of voices audio–visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. 
Once subjects were able to recognize these now familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.\",\"PeriodicalId\":49553,\"journal\":{\"name\":\"Seeing and Perceiving\",\"volume\":\"25 1\",\"pages\":\"184-185\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1163/187847612X648143\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Seeing and Perceiving\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1163/187847612X648143\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seeing and Perceiving","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/187847612X648143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and, conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality, a hypothesis that is contrary to current textbook knowledge. Here we test this hypothesis by harnessing a method with high temporal resolution, magnetoencephalography (MEG), to identify the temporal response profile of visual regions during auditory-only voice recognition. Participants (n = 19) briefly learned a set of voices audio-visually, i.e., together with a talking face, in an ecologically valid situation resembling daily life. Once subjects were able to recognize these now-familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers' voices: (i) activation of the visual, face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are already instrumental during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.