Assessing attentiveness and cognitive engagement across tasks using video-based action understanding in non-human primates

IF 2.3 | CAS Tier 4 (Medicine) | JCR Q2 | Biochemical Research Methods
Sin-Man Cheung , Adam Neumann , Thilo Womelsdorf
Journal of Neuroscience Methods, Volume 424, Article 110597
DOI: 10.1016/j.jneumeth.2025.110597
Published: 2025-10-10
https://www.sciencedirect.com/science/article/pii/S0165027025002419
Citations: 0

Abstract

Background

Distractibility and attentiveness are cognitive states that are expressed through observable behavior, but which behavioral features can be used to quantify these states has remained poorly understood. Video-based analysis promises to be a versatile tool for quantifying the behavioral features that reflect subject-specific distractibility and attentiveness and that are diagnostic of cognitive states.

New method

We describe an analysis pipeline that classifies cognitive states using a 2-camera set-up for video-based estimation of attentiveness and screen engagement in nonhuman primates performing cognitive tasks. The procedure reconstructs 3D poses from 2D DeepLabCut-labeled videos, estimates head/yaw orientation relative to a task screen and arm/hand/wrist engagement with task objects, and segments this behavior into time-resolved attentiveness and engagement scores.
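The head-orientation step can be illustrated with a minimal sketch: given triangulated 3D keypoints, the yaw offset between the head's facing direction and the screen follows from a dot product. The keypoint names, the screen-normal convention, and the function itself are illustrative assumptions, not the published pipeline's API.

```python
import numpy as np

def head_yaw_to_screen(nose_3d, left_ear_3d, right_ear_3d, screen_normal):
    """Angle (deg) between the head's facing direction and the screen.

    Keypoint choice and screen-normal convention are illustrative; the
    paper's pipeline derives 3D poses from 2-camera DeepLabCut labels.
    `screen_normal` is assumed to point from the screen toward the subject.
    """
    # Facing direction: from the midpoint between the ears toward the nose.
    ear_mid = (left_ear_3d + right_ear_3d) / 2.0
    facing = nose_3d - ear_mid
    facing = facing / np.linalg.norm(facing)
    screen_normal = screen_normal / np.linalg.norm(screen_normal)
    # 0 deg means the head points straight at the screen.
    cos_angle = np.clip(facing @ -screen_normal, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```

A per-frame yaw trace like this can then be thresholded to label frames as screen-directed or averted.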

Results

Performance of different cognitive tasks was robustly classified from video within a few frames, reaching >90 % decoding accuracy with time segments of ≤3 min. The analysis procedure allows adjusting subject-specific movement-segmentation thresholds for time-resolved scoring of attentiveness and screen engagement.
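The subject-adjustable thresholding described above can be sketched as follows: a frame counts as attentive when head yaw falls within a per-subject threshold, as engaged when a hand additionally contacts a task object, and scores are the per-window fractions of such frames. The function, parameter names, and default values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attentiveness_score(yaw_deg, hand_on_object, yaw_thresh=30.0, win=300):
    """Fraction of attentive/engaged frames per non-overlapping window.

    yaw_thresh is the subject-adjustable threshold (value illustrative);
    `win` is the window length in frames (e.g. 300 frames at 30 fps = 10 s).
    """
    attentive = np.abs(np.asarray(yaw_deg)) <= yaw_thresh
    engaged = attentive & np.asarray(hand_on_object, dtype=bool)
    # Score each complete window by the fraction of qualifying frames.
    n = len(attentive) // win
    att = attentive[:n * win].reshape(n, win).mean(axis=1)
    eng = engaged[:n * win].reshape(n, win).mean(axis=1)
    return att, eng
```

Per-subject calibration would then amount to choosing `yaw_thresh` (and the engagement criterion) from each animal's own movement distribution before scoring.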

Comparison with existing methods

Current methods also extract poses and segment action units; however, they have not been combined into a framework that enables subject-adjusted thresholding for specific task contexts. This integration is needed for inferring cognitive state variables and differentiating performance across various tasks.

Conclusion

The proposed method integrates video segmentation, scoring of attentiveness and screen engagement, and classification of task performance at high temporal resolution. This integrated framework provides a tool for assessing attention functions from video.
Source journal
Journal of Neuroscience Methods (Medicine – Neuroscience)
CiteScore: 7.10
Self-citation rate: 3.30%
Articles per year: 226
Review time: 52 days
Journal description: The Journal of Neuroscience Methods publishes papers that describe new methods specifically for neuroscience research conducted in invertebrates, vertebrates, or humans. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.