Title: Apprehending relational events: The visual world paradigm and the interplay of event perception and language
Authors: Alon Hafri, John C. Trueswell
Journal: Brain Research, pp. 150000 (JCR Q3, Neurosciences; Impact Factor 2.6)
DOI: 10.1016/j.brainres.2025.150000 (https://doi.org/10.1016/j.brainres.2025.150000)
Publication date: 2025-10-17 (Journal Article)
Citations: 0
Abstract
When we observe the world, we appreciate not only the colors, shapes, and textures of people and objects, but also how they interact with one another (e.g., in events such as a girl pushing a boy). Although it might seem intuitive that extracting events and other relations would require active effort and multiple fixations, a growing body of vision research suggests that humans rapidly and automatically extract relational information, including the structure of events (i.e., who is acting on whom), from a single fixation. These findings suggest that aspects of event structure can often be perceived without extensive visual interrogation. Yet despite this progress, much remains unknown about how visual events are perceived and represented, particularly for complex events (e.g., those involving roles beyond Agent and Patient, or events with multiple salient construals, such as chase vs. flee), and how these emerging representations interact with language during the interpretation and production of utterances about events. The visual world paradigm (VWP) offers a powerful tool for addressing these questions about the perception-language interface, by revealing which event representations are active when and by probing how language may guide event construal in real time. We review eye-tracking work in VWP studies of language comprehension and language production, as well as in related tasks, that provides initial insights into online event apprehension. This work suggests that (1) event apprehension and linguistic encoding are closely coordinated, interacting earlier and more continuously than previously recognized, and (2) fixations may serve to refine or disambiguate relational information extracted during initial processing, such as identifying event participants or clarifying their roles (e.g., as Instruments, Goals, or Recipients), with language in some cases guiding attentional prioritization toward certain event components.
More generally, this perspective offers a foundation for future VWP research exploring the dynamic relationship between seeing, listening, and speaking.
Journal description:
An international multidisciplinary journal devoted to fundamental research in the brain sciences.
Brain Research publishes papers reporting interdisciplinary investigations of nervous system structure and function that are of general interest to the international community of neuroscientists. As is evident from the journal's name, its scope is broad, ranging from cellular and molecular studies through systems neuroscience, cognition, and disease. Invited reviews are also published; suggestions for and inquiries about potential reviews are welcomed.
With the appearance of the final issue of the 2011 subscription, Vol. 67/1-2 (24 June 2011), Brain Research Reviews has ceased publication as a distinct journal separate from Brain Research. Review articles accepted for Brain Research are now published in that journal.