Assessing single camera markerless motion capture during upper limb activities of daily living

Bradley Scott, Edward Chadwick, Mhairi McInnes, Dimitra Blana
{"title":"评估上肢日常生活活动中单摄像头无标记动作捕捉","authors":"Bradley Scott, Edward Chadwick, Mhairi McInnes, Dimitra Blana","doi":"10.1016/j.gaitpost.2023.07.222","DOIUrl":null,"url":null,"abstract":"In a recent scoping review (Scott et al., 2022) we discussed how single camera markerless motion capture (SCMoCap) may help to facilitate motion analysis in situations where it would otherwise not be possible, such as at-home rehabilitation for children with cerebral palsy (Kidziński et al., 2020), and more frequent data collection. However, few studies reported error of measurement in a clinically interpretable manner and there is little evidence assessing SCMoCap during upper limb activities of daily living. Presenting a comprehensive validation of SCMoCap, alongside clinically meaningful evaluation of results would be invaluable for clinicians and future researchers who are interested in implementing upper limb movement analysis into clinical practice (Philp et al., 2021). Are state-of-the-art single camera markerless motion capture methods suitable for measuring joint angles during a typical upper-limb functional assessment? Study participants were instructed to perform a compressive collection of physiological and functional movements that are typically part of an upper limb functional assessment. Movements were repeated 3 times for both the frontal and sagittal planes. Movements were recorded simultaneously with a 10-camera OptiTrack Prime 13 W marker-based motion capture setup (NaturalPoint, USA) and Azure Kinect camera (Microsoft, USA). An eSync2 synchronization device (NaturalPoint, USA) was used to avoid exposure interference between systems. Marker-based bony landmarks and joint centers were collected as recommended by the International Society of Biomechanics (Wu et al., 2005). Marker-based trajectories were processed using MotionMonitor xGen (Innovative Sports Training, USA), where a 20 Hz lowpass Butterworth filter was applied to marker positions. Markerless joint center positions were calculated using Azure Kinect body tracking. Markerless positions were filtered using a 10 Hz lowpass Butterworth filter, then upsampled to 120 Hz matching the OptiTrack recording frequency. Signals were time synchronized using cross correlation. Joint angles were calculated by solving inverse kinematics in OpenSim using Hamner’s model (Hamner, Seth & Delp, 2010). Here we present preliminary results of elbow flexion agreement from one participant during a cup drinking task (see figure1). The agreement between markerless and marker-based methods was evaluated in RStudio using, Bland-Altman analysis (mean difference = -7.49 °, upper limits of agreement 20.87 °, lower limits of agreement -35.85 °); intra-class correlation coefficient (ICC = 0.91 °); and root mean squared error (RMSE = 16.30 °). Fig. 1: Elbow flexion angle during a cup drinking taskDownload : Download high-res image (95KB)Download : Download full-size image Our preliminary results suggest good agreement between markerless and marker-based motion capture for elbow flexion while performing a cup drinking task. The Kinect underestimates joint angles at local maxima and minima (see Fig. 1), as represented by a mean difference of -7.49°. 
The marker positions returned by Azure Kinect body tracking are also subject to sudden changes at extremes of motion that do not represent the movement.","PeriodicalId":94018,"journal":{"name":"Gait & posture","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing single camera markerless motion capture during upper limb activities of daily living\",\"authors\":\"Bradley Scott, Edward Chadwick, Mhairi McInnes, Dimitra Blana\",\"doi\":\"10.1016/j.gaitpost.2023.07.222\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In a recent scoping review (Scott et al., 2022) we discussed how single camera markerless motion capture (SCMoCap) may help to facilitate motion analysis in situations where it would otherwise not be possible, such as at-home rehabilitation for children with cerebral palsy (Kidziński et al., 2020), and more frequent data collection. However, few studies reported error of measurement in a clinically interpretable manner and there is little evidence assessing SCMoCap during upper limb activities of daily living. Presenting a comprehensive validation of SCMoCap, alongside clinically meaningful evaluation of results would be invaluable for clinicians and future researchers who are interested in implementing upper limb movement analysis into clinical practice (Philp et al., 2021). Are state-of-the-art single camera markerless motion capture methods suitable for measuring joint angles during a typical upper-limb functional assessment? Study participants were instructed to perform a compressive collection of physiological and functional movements that are typically part of an upper limb functional assessment. Movements were repeated 3 times for both the frontal and sagittal planes. Movements were recorded simultaneously with a 10-camera OptiTrack Prime 13 W marker-based motion capture setup (NaturalPoint, USA) and Azure Kinect camera (Microsoft, USA). An eSync2 synchronization device (NaturalPoint, USA) was used to avoid exposure interference between systems. Marker-based bony landmarks and joint centers were collected as recommended by the International Society of Biomechanics (Wu et al., 2005). Marker-based trajectories were processed using MotionMonitor xGen (Innovative Sports Training, USA), where a 20 Hz lowpass Butterworth filter was applied to marker positions. Markerless joint center positions were calculated using Azure Kinect body tracking. Markerless positions were filtered using a 10 Hz lowpass Butterworth filter, then upsampled to 120 Hz matching the OptiTrack recording frequency. Signals were time synchronized using cross correlation. Joint angles were calculated by solving inverse kinematics in OpenSim using Hamner’s model (Hamner, Seth & Delp, 2010). Here we present preliminary results of elbow flexion agreement from one participant during a cup drinking task (see figure1). The agreement between markerless and marker-based methods was evaluated in RStudio using, Bland-Altman analysis (mean difference = -7.49 °, upper limits of agreement 20.87 °, lower limits of agreement -35.85 °); intra-class correlation coefficient (ICC = 0.91 °); and root mean squared error (RMSE = 16.30 °). Fig. 
1: Elbow flexion angle during a cup drinking taskDownload : Download high-res image (95KB)Download : Download full-size image Our preliminary results suggest good agreement between markerless and marker-based motion capture for elbow flexion while performing a cup drinking task. The Kinect underestimates joint angles at local maxima and minima (see Fig. 1), as represented by a mean difference of -7.49°. The marker positions returned by Azure Kinect body tracking are also subject to sudden changes at extremes of motion that do not represent the movement.\",\"PeriodicalId\":94018,\"journal\":{\"name\":\"Gait & posture\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Gait & posture\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.gaitpost.2023.07.222\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gait & posture","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.gaitpost.2023.07.222","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In a recent scoping review (Scott et al., 2022) we discussed how single camera markerless motion capture (SCMoCap) may facilitate motion analysis in situations where it would otherwise not be possible, such as at-home rehabilitation for children with cerebral palsy (Kidziński et al., 2020), and enable more frequent data collection. However, few studies have reported measurement error in a clinically interpretable manner, and there is little evidence assessing SCMoCap during upper limb activities of daily living. A comprehensive validation of SCMoCap, alongside clinically meaningful evaluation of the results, would be invaluable for clinicians and future researchers interested in bringing upper limb movement analysis into clinical practice (Philp et al., 2021). Are state-of-the-art single camera markerless motion capture methods suitable for measuring joint angles during a typical upper-limb functional assessment?

Study participants were instructed to perform a comprehensive collection of physiological and functional movements that are typically part of an upper limb functional assessment. Each movement was repeated 3 times in both the frontal and sagittal planes. Movements were recorded simultaneously with a 10-camera OptiTrack Prime 13W marker-based motion capture setup (NaturalPoint, USA) and an Azure Kinect camera (Microsoft, USA). An eSync2 synchronization device (NaturalPoint, USA) was used to avoid exposure interference between the systems. Marker-based bony landmarks and joint centers were collected as recommended by the International Society of Biomechanics (Wu et al., 2005). Marker-based trajectories were processed using MotionMonitor xGen (Innovative Sports Training, USA), where a 20 Hz lowpass Butterworth filter was applied to marker positions. Markerless joint center positions were calculated using Azure Kinect body tracking, filtered with a 10 Hz lowpass Butterworth filter, then upsampled to 120 Hz to match the OptiTrack recording frequency. The signals were time-synchronized using cross-correlation. Joint angles were calculated by solving inverse kinematics in OpenSim using Hamner's model (Hamner, Seth & Delp, 2010).

Here we present preliminary results of elbow flexion agreement from one participant during a cup drinking task (see Fig. 1). Agreement between the markerless and marker-based methods was evaluated in RStudio using Bland-Altman analysis (mean difference = -7.49°, upper limit of agreement = 20.87°, lower limit of agreement = -35.85°), the intra-class correlation coefficient (ICC = 0.91), and the root mean squared error (RMSE = 16.30°).

Fig. 1: Elbow flexion angle during a cup drinking task.

Our preliminary results suggest good agreement between markerless and marker-based motion capture for elbow flexion during a cup drinking task. The Kinect underestimates joint angles at local maxima and minima (see Fig. 1), as reflected in the mean difference of -7.49°. The joint positions returned by Azure Kinect body tracking are also subject to sudden changes at extremes of motion that do not reflect the actual movement.
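The preprocessing chain described above lends itself to a short illustration. The sketch below, in Python with NumPy/SciPy (not the authors' pipeline, which used MotionMonitor xGen and OpenSim), shows the lowpass filtering, upsampling, and cross-correlation synchronization steps on synthetic elbow-flexion traces; the 30 Hz native Kinect rate, the filter order, and the signal shapes are assumptions not stated in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

def lowpass(x, cutoff_hz, fs_hz, order=4):
    # Zero-phase Butterworth lowpass (the filter order is an assumption).
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, x, axis=0)

# Synthetic stand-ins for the two recordings of the same elbow movement:
# OptiTrack at 120 Hz and Azure Kinect at an assumed native 30 Hz, with
# the Kinect stream starting 0.5 s later than the OptiTrack stream.
t120 = np.arange(0, 10, 1 / 120)
t30 = np.arange(0, 10, 1 / 30)
optitrack_angle = 60 + 40 * np.sin(2 * np.pi * 0.5 * t120)
kinect_angle_30 = 60 + 40 * np.sin(2 * np.pi * 0.5 * (t30 - 0.5))

# Filtering as described: 20 Hz lowpass on the marker data, 10 Hz on the
# Kinect data, then upsample the Kinect signal to 120 Hz to match OptiTrack.
optitrack_f = lowpass(optitrack_angle, 20, 120)
kinect_f = lowpass(kinect_angle_30, 10, 30)
kinect_120 = resample(kinect_f, len(kinect_f) * 4)

# Cross-correlation time synchronization: estimate the sample lag that
# maximizes correlation between the mean-centred signals.
x = optitrack_f - optitrack_f.mean()
y = kinect_120 - kinect_120.mean()
lag = np.argmax(np.correlate(x, y, mode="full")) - (len(y) - 1)
print(f"estimated lag: {lag} samples = {lag / 120:.3f} s")
# ≈ -0.5 s here, i.e. the Kinect trace starts 0.5 s late.
```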
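The agreement statistics reported above are likewise straightforward to reproduce on any pair of synchronized traces. The minimal sketch below computes the Bland-Altman mean difference with 95% limits of agreement, plus the RMSE; it is a Python illustration on synthetic stand-in data, whereas the study's analysis was performed in RStudio.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two methods."""
    diff = a - b
    md = diff.mean()
    sd = diff.std(ddof=1)
    return md, md + 1.96 * sd, md - 1.96 * sd

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical time-synchronized elbow flexion traces (degrees) from the
# marker-based and markerless systems; stand-ins for the study data.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1200)
marker_based = 60 + 40 * np.sin(2 * np.pi * 0.5 * t)
markerless = marker_based - 7.5 + rng.normal(0, 14, t.size)

md, loa_upper, loa_lower = bland_altman(markerless, marker_based)
print(f"mean difference: {md:.2f} deg")
print(f"limits of agreement: {loa_lower:.2f} to {loa_upper:.2f} deg")
print(f"RMSE: {rmse(markerless, marker_based):.2f} deg")
# An ICC (e.g. ICC(2,1)) would typically be computed with a dedicated
# package such as pingouin.intraclass_corr; note the ICC is unitless.
```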