Automatic infant 2D pose estimation from videos: Comparing seven deep neural network methods.

IF 3.9 · Q1, PSYCHOLOGY, EXPERIMENTAL (JCR) · Region 2 (Psychology)
Filipe Gama, Matěj Mísař, Lukáš Navara, Sergiu T Popescu, Matej Hoffmann
DOI: 10.3758/s13428-025-02816-x
Journal: Behavior Research Methods, 57(10), 280
Published: 2025-09-10
Cited by: 0

Abstract

Automatic markerless estimation of infant posture and motion from ordinary videos carries great potential for movement studies "in the wild", facilitating understanding of motor development and massively increasing the chances of early diagnosis of disorders. There has been a rapid development of human pose estimation methods in computer vision, thanks to advances in deep learning and machine learning. However, these methods are trained on datasets that feature adults in different contexts. This work tests and compares seven popular methods (AlphaPose, DeepLabCut/DeeperCut, Detectron2, HRNet, MediaPipe/BlazePose, OpenPose, and ViTPose) on videos of infants in supine position and in more complex settings. Surprisingly, all methods except DeepLabCut and MediaPipe exhibit competitive performance without additional fine-tuning, with ViTPose performing the best. Next to standard performance metrics (average precision and recall), we introduce errors expressed in the neck-mid-hip (torso length) ratio and additionally study missing and redundant detections, and the reliability of the internal confidence ratings of the different methods, which are relevant for downstream tasks. Among the networks with competitive performance, only AlphaPose could run at close to real-time speed (27 fps) on our machine. We provide documented Docker containers or instructions for all the methods we used, our analysis scripts, and the processed data at https://hub.docker.com/u/humanoidsctu and https://osf.io/x465b/ .
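The abstract's neck-mid-hip (torso length) ratio normalizes each keypoint's pixel error by the ground-truth distance from neck to mid-hip, making errors comparable across infants filmed at different scales. A minimal sketch of that idea follows; the function name, keypoint indices, and array layout are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def torso_normalized_error(pred, gt, neck_idx, mid_hip_idx):
    """Per-keypoint error expressed as a fraction of torso length.

    pred, gt: (K, 2) arrays of predicted / ground-truth 2D keypoints.
    neck_idx, mid_hip_idx: indices of the neck and mid-hip keypoints
    (hypothetical layout; the paper's keypoint scheme may differ).
    """
    # Torso length: ground-truth neck-to-mid-hip distance.
    torso_len = np.linalg.norm(gt[neck_idx] - gt[mid_hip_idx])
    # Euclidean error per keypoint, scale-normalized by torso length.
    errors = np.linalg.norm(pred - gt, axis=1)
    return errors / torso_len
```

With this normalization, a ratio of 0.05 means a prediction is off by 5% of the infant's torso length, regardless of image resolution or camera distance.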

Source journal: Behavior Research Methods
CiteScore: 10.30
Self-citation rate: 9.30%
Articles per year: 266
期刊介绍: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.