AI-smartphone markerless motion capturing of hip, knee, and ankle joint kinematics during countermovement jumps

Philipp Barzyk, Philip Zimmermann, Manuel Stein, Daniel Keim, Markus Gruber
{"title":"AI-smartphone markerless motion capturing of hip, knee, and ankle joint kinematics during countermovement jumps","authors":"Philipp Barzyk,&nbsp;Philip Zimmermann,&nbsp;Manuel Stein,&nbsp;Daniel Keim,&nbsp;Markus Gruber","doi":"10.1002/ejsc.12186","DOIUrl":null,"url":null,"abstract":"<p>Recently, AI-driven skeleton reconstruction tools that use multistage computer vision pipelines were designed to estimate 3D kinematics from 2D video sequences. In the present study, we validated a novel markerless, smartphone video-based artificial intelligence (AI) motion capture system for hip, knee, and ankle angles during countermovement jumps (CMJs). Eleven participants performed six CMJs. We used 2D videos created by a smartphone (Apple iPhone X, 4K, 60 fps) to create 24 different keypoints, which together built a full skeleton including joints and their connections. Body parts and skeletal keypoints were localized by calculating confidence maps using a multilevel convolutional neural network that integrated both spatial and temporal features. We calculated hip, knee, and ankle angles in the sagittal plane and compared it with the angles measured by a VICON system. We calculated the correlation between both method's angular progressions, mean squared error (MSE), mean average error (MAE), and the maximum and minimum angular error and run statistical parametric mapping (SPM) analysis. Pearson correlation coefficients (r) for hip, knee, and ankle angular progressions in the sagittal plane during the entire movement were 0.96, 0.99, and 0.87, respectively. SPM group-analysis revealed some significant differences only for ankle angular progression. MSE was below 5.7°, MAE was below 4.5°, and error for maximum amplitudes was below 3.2°. The smartphone AI motion capture system with the trained multistage computer vision pipeline was able to detect, especially hip and knee angles in the sagittal plane during CMJs with high precision from a frontal view only.</p>","PeriodicalId":93999,"journal":{"name":"European journal of sport science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11451555/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European journal of sport science","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ejsc.12186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recently, AI-driven skeleton reconstruction tools that use multistage computer vision pipelines have been designed to estimate 3D kinematics from 2D video sequences. In the present study, we validated a novel markerless, smartphone video-based artificial intelligence (AI) motion capture system for hip, knee, and ankle angles during countermovement jumps (CMJs). Eleven participants performed six CMJs each. From 2D videos recorded with a smartphone (Apple iPhone X, 4K, 60 fps), the system estimated 24 different keypoints that together formed a full skeleton, including the joints and their connections. Body parts and skeletal keypoints were localized by calculating confidence maps with a multilevel convolutional neural network that integrated both spatial and temporal features. We calculated hip, knee, and ankle angles in the sagittal plane and compared them with the angles measured by a VICON system. We calculated the correlation between the two methods' angular progressions, the mean squared error (MSE), the mean absolute error (MAE), and the maximum and minimum angular errors, and ran a statistical parametric mapping (SPM) analysis. Pearson correlation coefficients (r) for the hip, knee, and ankle angular progressions in the sagittal plane over the entire movement were 0.96, 0.99, and 0.87, respectively. The SPM group analysis revealed significant differences only for the ankle angular progression. The MSE was below 5.7°, the MAE was below 4.5°, and the error for maximum amplitudes was below 3.2°. The smartphone AI motion capture system with the trained multistage computer vision pipeline was able to detect hip and knee angles, in particular, in the sagittal plane during CMJs with high precision from a frontal view only.
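Two computational steps in the abstract are easy to make concrete: deriving a sagittal-plane joint angle from three 2D keypoints (e.g., hip, knee, ankle for the knee angle) and quantifying agreement with the VICON reference via Pearson r, MSE, MAE, and the errors at the amplitude extremes. The following is a minimal Python sketch of that arithmetic; the keypoint layout, function names, and sample coordinates are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def sagittal_joint_angle(p_proximal, p_joint, p_distal):
    """Angle (degrees) at p_joint between the two adjoining segments,
    e.g. hip-knee-ankle keypoints for the knee angle. Inputs are 2D
    (x, y) coordinates from a single video frame."""
    u = np.asarray(p_proximal, dtype=float) - np.asarray(p_joint, dtype=float)
    v = np.asarray(p_distal, dtype=float) - np.asarray(p_joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def agreement_metrics(markerless_deg, reference_deg):
    """Agreement between two angle time series (degrees), mirroring the
    scalar metrics reported in the abstract (SPM not included)."""
    a = np.asarray(markerless_deg, dtype=float)
    b = np.asarray(reference_deg, dtype=float)
    err = a - b
    return {
        "pearson_r": float(np.corrcoef(a, b)[0, 1]),
        "mse": float(np.mean(err ** 2)),
        "mae": float(np.mean(np.abs(err))),
        # errors at the maximum/minimum amplitudes of the angular progression
        "max_amplitude_error": float(abs(a.max() - b.max())),
        "min_amplitude_error": float(abs(a.min() - b.min())),
    }

# Hypothetical pixel coordinates for one frame: a nearly extended leg
hip, knee, ankle = (320, 200), (330, 330), (325, 460)
print(f"knee angle: {sagittal_joint_angle(hip, knee, ankle):.1f} deg")
```

The pointwise comparison along the whole movement would additionally require a 1D SPM analysis, for which open-source implementations such as the spm1d package exist.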
