{"title":"使用听觉和视觉进行运动预测,运动感知和定位。","authors":"Yichen Yuan, Nathan Van der Stoep, Surya Gayet","doi":"10.1037/xge0001725","DOIUrl":null,"url":null,"abstract":"<p><p>Predicting the location of moving objects in noisy environments is essential to everyday behavior, like when participating in traffic. Although many objects provide multisensory information, it remains unknown how humans use multisensory information to localize moving objects, and how this depends on expected sensory interference (e.g., occlusion). In four experiments, we systematically investigated localization for auditory, visual, and audiovisual targets (AV). Performance for audiovisual targets was compared to performance predicted by maximum likelihood estimation (MLE). In Experiment 1A, moving targets were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Participants relied exclusively on the visual component of the audiovisual target, even though the auditory component demonstrably provided useful location information when presented in isolation. In contrast, when a visual-only occluder was used in Experiment 1B, participants relied exclusively on the auditory component of the audiovisual target, even though the visual component demonstrably provided useful location information when presented in isolation. In Experiment 2, although localization estimates were in line with MLE predictions, no multisensory precision benefits were found when participants localized moving audiovisual target. In Experiment 3, a substantial multisensory benefit was found when participants localized static audiovisual target, showing near-MLE integration. In sum, observers use both hearing and vision when localizing static objects, but use only unisensory input when localizing moving objects and predicting motion under occlusion. Moreover, observers can flexibly prioritize one sense over the other, in anticipation of modality-specific interference. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":15698,"journal":{"name":"Journal of Experimental Psychology: General","volume":" ","pages":"1351-1367"},"PeriodicalIF":3.7000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using hearing and vision for motion prediction, motion perception, and localization.\",\"authors\":\"Yichen Yuan, Nathan Van der Stoep, Surya Gayet\",\"doi\":\"10.1037/xge0001725\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Predicting the location of moving objects in noisy environments is essential to everyday behavior, like when participating in traffic. Although many objects provide multisensory information, it remains unknown how humans use multisensory information to localize moving objects, and how this depends on expected sensory interference (e.g., occlusion). In four experiments, we systematically investigated localization for auditory, visual, and audiovisual targets (AV). Performance for audiovisual targets was compared to performance predicted by maximum likelihood estimation (MLE). In Experiment 1A, moving targets were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Participants relied exclusively on the visual component of the audiovisual target, even though the auditory component demonstrably provided useful location information when presented in isolation. 
In contrast, when a visual-only occluder was used in Experiment 1B, participants relied exclusively on the auditory component of the audiovisual target, even though the visual component demonstrably provided useful location information when presented in isolation. In Experiment 2, although localization estimates were in line with MLE predictions, no multisensory precision benefits were found when participants localized moving audiovisual target. In Experiment 3, a substantial multisensory benefit was found when participants localized static audiovisual target, showing near-MLE integration. In sum, observers use both hearing and vision when localizing static objects, but use only unisensory input when localizing moving objects and predicting motion under occlusion. Moreover, observers can flexibly prioritize one sense over the other, in anticipation of modality-specific interference. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>\",\"PeriodicalId\":15698,\"journal\":{\"name\":\"Journal of Experimental Psychology: General\",\"volume\":\" \",\"pages\":\"1351-1367\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Experimental Psychology: General\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/xge0001725\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Psychology: General","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/xge0001725","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
Predicting the location of moving objects in noisy environments is essential for everyday behavior, such as when navigating traffic. Although many objects provide multisensory information, it remains unknown how humans use multisensory information to localize moving objects, and how this depends on expected sensory interference (e.g., occlusion). In four experiments, we systematically investigated localization of auditory, visual, and audiovisual (AV) targets. Performance for audiovisual targets was compared with performance predicted by maximum likelihood estimation (MLE). In Experiment 1A, moving targets were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Participants relied exclusively on the visual component of the audiovisual target, even though the auditory component demonstrably provided useful location information when presented in isolation. In contrast, when a visual-only occluder was used in Experiment 1B, participants relied exclusively on the auditory component of the audiovisual target, even though the visual component demonstrably provided useful location information when presented in isolation. In Experiment 2, although localization estimates were in line with MLE predictions, no multisensory precision benefits were found when participants localized moving audiovisual targets. In Experiment 3, a substantial multisensory benefit was found when participants localized static audiovisual targets, showing near-MLE integration. In sum, observers use both hearing and vision when localizing static objects, but use only unisensory input when localizing moving objects and predicting motion under occlusion. Moreover, observers can flexibly prioritize one sense over the other in anticipation of modality-specific interference. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
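The MLE benchmark referenced in the abstract is the standard precision-weighted cue-combination model from the multisensory literature. The sketch below illustrates how bimodal predictions of this kind are typically derived from unisensory performance; the function name and the numeric values are illustrative assumptions, not taken from the article.

import math

def mle_prediction(x_a, sigma_a, x_v, sigma_v):
    """MLE (precision-weighted) combination of auditory and visual estimates.

    x_a, x_v: unisensory location estimates (e.g., degrees of visual angle)
    sigma_a, sigma_v: standard deviations of the unisensory estimates
    """
    # Each cue is weighted by its relative reliability (inverse variance).
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    w_v = 1.0 - w_a
    x_av = w_a * x_a + w_v * x_v
    # The predicted bimodal variability is lower than that of either cue alone;
    # this is the "multisensory precision benefit" tested in Experiments 2 and 3.
    sigma_av = math.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return x_av, sigma_av

# Illustrative values: a noisy auditory cue and a more precise visual cue.
x_av, sigma_av = mle_prediction(x_a=2.0, sigma_a=4.0, x_v=1.0, sigma_v=2.0)
print(f"predicted AV estimate: {x_av:.2f} (sd {sigma_av:.2f})")
# -> predicted AV estimate: 1.20 (sd 1.79): the combined estimate is pulled
#    toward the more reliable visual cue, and the predicted sd of 1.79 is
#    smaller than either unisensory sd (4.0 and 2.0).

Comparing observed bimodal precision against sigma_av computed this way is what distinguishes near-MLE integration (Experiment 3) from the absence of a precision benefit (Experiment 2).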
Journal Introduction:
The Journal of Experimental Psychology: General publishes articles describing empirical work that bridges the traditional interests of two or more communities of psychology. The work may touch on issues dealt with in JEP: Learning, Memory, and Cognition, JEP: Human Perception and Performance, JEP: Animal Behavior Processes, or JEP: Applied, but may also concern issues in other subdisciplines of psychology, including social processes, developmental processes, psychopathology, neuroscience, or computational modeling. Articles in JEP: General may be longer than the usual journal publication if necessary, but shorter articles that bridge subdisciplines will also be considered.