Progressive Inertial Poser: Progressive Real-Time Kinematic Chain Estimation for 3-D Full-Body Pose From Three IMU Sensors

Impact Factor: 5.6 · CAS Zone 2 (Engineering & Technology) · JCR Q1: Engineering, Electrical & Electronic
Zunjie Zhu;Yan Zhao;Yihan Hu;Guoxiang Wang;Hai Qiu;Bolun Zheng;Chenggang Yan;Feng Xu
DOI: 10.1109/TIM.2025.3570339
Journal: IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1-13
Publication date: 2025-06-02 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11018877/
Citations: 0

Abstract

The motion capture system that supports full-body virtual representation is of key significance for virtual reality. Compared with vision-based systems, full-body pose estimation from sparse tracking signals is not limited by environmental conditions or recording range. However, previous works either face the challenge of wearing additional sensors on the pelvis and lower body or rely on external visual sensors to obtain global positions of key joints. To improve the practicality of the technology for virtual reality applications, we estimate full-body poses using only inertial data obtained from three inertial measurement unit (IMU) sensors worn on the head and wrists, thereby reducing the complexity of the hardware system. In this work, we propose a method called progressive inertial poser (ProgIP) for human pose estimation, which combines neural network estimation with a human dynamics model, considers the hierarchical structure of the kinematic chain, and employs a multistage progressive network estimation with increased depth to reconstruct full-body motion in real time. The encoder combines Transformer encoder and bidirectional LSTM (TE-biLSTM) to flexibly capture the temporal dependencies of the inertial sequence, while the decoder based on multilayer perceptrons (MLPs) transforms high-dimensional features and accurately projects them onto skinned multiperson linear (SMPL) model parameters. Quantitative and qualitative experimental results on multiple public datasets show that our method outperforms state-of-the-art methods with the same inputs and is comparable to recent works using six IMU sensors.
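The progressive, kinematic-chain-ordered estimation described above can be illustrated with a minimal NumPy sketch: joints near the sensed end-effectors are estimated first, and a deeper stage then predicts the remaining joints conditioned on both the raw IMU features and the earlier estimate. All dimensions, the two-stage split, the plain MLPs, and the 6-D rotation output here are illustrative assumptions; they stand in for, and do not reproduce, the paper's actual TE-biLSTM encoder and multistage network.

```python
import numpy as np

# Hypothetical dimensions -- assumptions for illustration, not taken from the paper.
FEAT_IN = 3 * 12      # 3 IMUs x (9-D orientation + 3-D acceleration) per frame
UPPER_JOINTS = 9      # joints on the head/arm sub-chain nearest the sensors (assumed)
LOWER_JOINTS = 15     # remaining joints farther down the kinematic chain (assumed)
ROT_DIM = 6           # 6-D rotation representation per joint (a common choice)

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for a learned decoder stage."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def make_weights(d_in, d_hidden, d_out):
    """Randomly initialised stand-in weights; a trained model would learn these."""
    return (rng.standard_normal((d_in, d_hidden)) * 0.01,
            np.zeros(d_hidden),
            rng.standard_normal((d_hidden, d_out)) * 0.01,
            np.zeros(d_out))

stage1 = make_weights(FEAT_IN, 128, UPPER_JOINTS * ROT_DIM)
# Stage 2 sees the raw IMU features *and* the stage-1 estimate, mirroring the
# idea of progressively deepening estimation along the kinematic chain.
stage2 = make_weights(FEAT_IN + UPPER_JOINTS * ROT_DIM, 128, LOWER_JOINTS * ROT_DIM)

def progressive_pose(imu_feats):
    upper = mlp(imu_feats, *stage1)                                # near joints first
    lower = mlp(np.concatenate([imu_feats, upper], -1), *stage2)   # then condition on them
    return (upper.reshape(-1, UPPER_JOINTS, ROT_DIM),
            lower.reshape(-1, LOWER_JOINTS, ROT_DIM))

batch = rng.standard_normal((4, FEAT_IN))   # 4 frames of flattened IMU features
upper, lower = progressive_pose(batch)
print(upper.shape, lower.shape)             # (4, 9, 6) (4, 15, 6)
```

The design point the sketch captures is that later stages receive earlier estimates as input, so errors at distal joints can be corrected in the context of the already-resolved proximal chain.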
Source journal: IEEE Transactions on Instrumentation and Measurement (Engineering: Electrical & Electronic)
CiteScore: 9.00
Self-citation rate: 23.20%
Articles published: 1294
Review time: 3.9 months
Journal description: Papers are sought that address innovative solutions to the development and use of electrical and electronic instruments and equipment to measure, monitor and/or record physical phenomena for the purpose of advancing measurement science, methods, functionality and applications. The scope of these papers may encompass: (1) theory, methodology, and practice of measurement; (2) design, development and evaluation of instrumentation and measurement systems and components used in generating, acquiring, conditioning and processing signals; (3) analysis, representation, display, and preservation of the information obtained from a set of measurements; and (4) scientific and technical support to establishment and maintenance of technical standards in the field of Instrumentation and Measurement.