Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm.

Impact Factor: 2.5 · CAS Tier 3 (Medicine) · JCR Q2 (Behavioral Sciences)
Maximilian Reger, Oleg Vrabie, Gregor Volberg, Angelika Lingnau
{"title":"Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm.","authors":"Maximilian Reger, Oleg Vrabie, Gregor Volberg, Angelika Lingnau","doi":"10.3758/s13415-025-01272-6","DOIUrl":null,"url":null,"abstract":"<p><p>Being able to quickly recognize other people's actions lies at the heart of our ability to efficiently interact with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, stimulus presentation times that are required to extract information about actions, objects, and scenes to our knowledge have not yet been directly compared. To address this gap in the literature, we compared the recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than objects (68 ms) and scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest thresholds for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these are presented in the same scene but accumulates faster for actions that reflect static body posture recognition than for objects and scenes.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Affective & Behavioral Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3758/s13415-025-01272-6","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Being able to quickly recognize other people's actions lies at the heart of our ability to interact efficiently with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, to our knowledge, the stimulus presentation times required to extract information about actions, objects, and scenes have not yet been directly compared. To address this gap in the literature, we compared recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory detail. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than recognizing objects (68 ms) or scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these appear within the same image, but that it accumulates faster for actions, reflecting static body-posture recognition, than for objects and scenes.
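The thresholds reported here are derived from recognition performance at graded presentation times (33-500 ms). As a concrete illustration of how such a threshold can be read off this kind of data, the sketch below fits a logistic psychometric function to proportion-correct scores and extracts the presentation time at which recognition crosses 50%. This is not the authors' analysis code; the logistic form and the 50% criterion are common default choices, and all data values except the 33-500 ms range are invented for illustration.

```python
# Minimal sketch: estimate a recognition threshold by fitting a logistic
# psychometric function to proportion-correct data across presentation times.
# NOTE: hypothetical data; not the analysis pipeline from the paper.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, threshold, slope):
    """Probability of correct recognition at presentation time t (ms).

    `threshold` is the time at which performance reaches 50%."""
    return 1.0 / (1.0 + np.exp(-slope * (t - threshold)))

# Presentation times (ms) spanning the 33-500 ms range used in the study,
# with invented proportions of trials rated as correctly described.
times = np.array([33, 50, 66, 83, 100, 150, 250, 500], dtype=float)
p_correct = np.array([0.05, 0.20, 0.55, 0.70, 0.80, 0.90, 0.95, 0.97])

# Fit the two free parameters; p0 gives plausible starting values.
(threshold, slope), _ = curve_fit(logistic, times, p_correct,
                                  p0=[80.0, 0.05])

print(f"Estimated recognition threshold: {threshold:.1f} ms")
```

In the study itself, thresholds of this kind would be estimated separately from the action, object, and scene scores assigned by the raters, allowing the three time courses to be compared directly.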

Source journal
CiteScore: 5.00
Self-citation rate: 3.40%
Articles published per year: 64
Review time: 6-12 weeks
Journal description: Cognitive, Affective, & Behavioral Neuroscience (CABN) offers theoretical, review, and primary research articles on behavior and brain processes in humans. Coverage includes normal function as well as patients with injuries or processes that influence brain function: neurological disorders, healthy and disordered aging, and psychiatric disorders such as schizophrenia and depression. CABN is the leading vehicle for strongly psychologically motivated studies of brain-behavior relationships, presenting papers that integrate psychological theory with the conduct and interpretation of neuroscientific data. Topics include perception, attention, memory, language, problem solving, reasoning, and decision-making; emotional processes, motivation, reward prediction, and affective states; and individual differences in relevant domains, including personality. Cognitive, Affective, & Behavioral Neuroscience is a publication of the Psychonomic Society.