Maximilian Reger, Oleg Vrabie, Gregor Volberg, Angelika Lingnau
Title: Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm
Journal: Cognitive, Affective, & Behavioral Neuroscience (JCR Q2, Behavioral Sciences; Impact Factor 2.5)
DOI: 10.3758/s13415-025-01272-6
Publication date: 2025-02-26
Publication type: Journal Article
Citations: 0
Abstract
Being able to quickly recognize other people's actions lies at the heart of our ability to interact efficiently with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, the stimulus presentation times required to extract information about actions, objects, and scenes have, to our knowledge, not yet been directly compared. To address this gap in the literature, we compared the recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than recognizing objects (68 ms) and scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these are presented in the same scene, but accumulates faster for actions that reflect static body posture recognition than for objects and scenes.
Journal Description
Cognitive, Affective, & Behavioral Neuroscience (CABN) offers theoretical, review, and primary research articles on behavior and brain processes in humans. Coverage includes normal function as well as patients with injuries or conditions that influence brain function, spanning neurological disorders, both healthy and disordered aging, and psychiatric disorders such as schizophrenia and depression. CABN is the leading vehicle for strongly psychologically motivated studies of brain-behavior relationships, presenting papers that integrate psychological theory with the conduct and interpretation of neuroscientific data. The range of topics includes perception, attention, memory, language, problem solving, reasoning, and decision-making; emotional processes, motivation, reward prediction, and affective states; and individual differences in relevant domains, including personality. Cognitive, Affective, & Behavioral Neuroscience is a publication of the Psychonomic Society.