Denoising of Spatiotemporal Gait Waveforms from Motion-Sensing Depth Camera using Least Mean Square (LMS) Adaptive Filter

Karl Vincent G. Castillo, N. Mendoza, Chelsea Andrea S. Morales, Allen Dominic A. Perez, Jansen Yna L. Unisa, A. Cruz
{"title":"Denoising of Spatiotemporal Gait Waveforms from Motion-Sensing Depth Camera using Least Mean Square (LMS) Adaptive Filter","authors":"Karl Vincent G. Castillo, N. Mendoza, Chelsea Andrea S. Morales, Allen Dominic A. Perez, Jansen Yna L. Unisa, A. Cruz","doi":"10.1109/HNICEM.2018.8666294","DOIUrl":null,"url":null,"abstract":"Quantitative gait assessment is made possible through the use of instruments such as 3D motion capture systems (Mo-Cap) and its cheaper counterpart, an RGB-D camera. The cost-effectiveness of the RGB-D camera makes it a more practical instrument to use in most clinical settings but it is not as accurate as the Mo-Cap. This paper presents denoising of gait waveforms obtained from the most common parameters produced by RGB-D camera using LMS adaptive filter. The data used came from 14 study volunteers whose normal walking gait were recorded using VICON cameras and Microsoft Kinect v2 sensors. Spatiotemporal gait parameters were calculated from the two gait waveforms. The adaptive filter was trained using training dataset to create a filter model that was then used for the testing phase. Two given data sets, unfiltered and filtered gait parameters, were compared to the motion capture system gait parameters using statistical tools. Unfiltered parameters from the RGB-D camera exhibit significant difference with the Mo-Cap parameters at an average percent error of 26.83%. 
By calculating statistical values such as mean, standard deviation and root mean square error (RMSE), the findings confirmed that filtered parameter data improved at an average of 11.34% and there is a reduction of rejected parameters by 6 using t-test.","PeriodicalId":426103,"journal":{"name":"2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology,Communication and Control, Environment and Management (HNICEM)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology,Communication and Control, Environment and Management (HNICEM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HNICEM.2018.8666294","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Quantitative gait assessment is made possible by instruments such as 3D motion-capture (Mo-Cap) systems and their cheaper counterpart, the RGB-D camera. The cost-effectiveness of the RGB-D camera makes it the more practical instrument for most clinical settings, but it is not as accurate as the Mo-Cap system. This paper presents the denoising, with an LMS adaptive filter, of gait waveforms obtained from the most common parameters produced by an RGB-D camera. The data came from 14 study volunteers whose normal walking gait was recorded using VICON cameras and Microsoft Kinect v2 sensors. Spatiotemporal gait parameters were calculated from the two sets of gait waveforms. The adaptive filter was trained on a training dataset to create a filter model that was then applied in the testing phase. The two resulting data sets, unfiltered and filtered gait parameters, were compared against the motion-capture gait parameters using statistical tools. Unfiltered parameters from the RGB-D camera exhibited a significant difference from the Mo-Cap parameters, with an average percent error of 26.83%. Based on statistics such as the mean, standard deviation, and root mean square error (RMSE), the filtered parameter data improved by an average of 11.34%, and the number of parameters rejected by the t-test decreased by 6.
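The abstract does not give the filter order or step size used in the study, so the following is only a minimal NumPy sketch of a standard sample-by-sample LMS adaptive filter, with hypothetical tap count and step size (`num_taps`, `mu`). Here the reference signal plays the role of the Mo-Cap waveform and the noisy input that of the Kinect v2 waveform.

```python
import numpy as np

def lms_filter(d, x, num_taps=8, mu=0.02):
    """Standard LMS adaptive filter (sketch, not the paper's exact model).

    d : desired/reference signal (e.g. the Mo-Cap waveform)
    x : noisy input signal (e.g. the Kinect v2 waveform)
    Returns (y, e, w): filter output, error signal, final tap weights.
    """
    n = len(x)
    w = np.zeros(num_taps)   # tap weights, adapted once per sample
    y = np.zeros(n)          # filtered output
    e = np.zeros(n)          # instantaneous error d - y
    for i in range(num_taps, n):
        x_win = x[i - num_taps:i][::-1]  # most recent samples first
        y[i] = w @ x_win                 # FIR filter output
        e[i] = d[i] - y[i]               # error against the reference
        w += mu * e[i] * x_win           # LMS weight update
    return y, e, w

# Illustration on synthetic data: a sinusoid standing in for a gait
# waveform, corrupted by white noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
clean = np.sin(np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
y, e, w = lms_filter(clean, noisy)
```

After a training phase like this, the learned weights `w` form the fixed filter model that the paper then applies to the test set. The error power should fall as the taps converge toward a Wiener-like smoothing filter.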
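The comparison metrics named in the abstract (RMSE and average percent error) can be sketched as below; the function names are mine, and the exact aggregation across subjects and parameters in the paper may differ.

```python
import numpy as np

def rmse(reference, estimate):
    """Root mean square error between estimated and reference parameters."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))

def mean_percent_error(reference, estimate):
    """Average percent error of the estimates relative to the reference."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean(np.abs(estimate - reference) / np.abs(reference)) * 100.0)
```

Computing these for the Kinect-derived parameters before and after filtering, against the Mo-Cap parameters as reference, gives the kind of before/after comparison (26.83% unfiltered percent error, 11.34% average improvement) reported in the abstract.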