Advanced Millimeter Wave Radar-Based Human Pose Estimation Enabled by a Deep Learning Neural Network Trained With Optical Motion Capture Ground Truth Data

IF 6.9 Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Lukas Engel, Jonas Mueller, Eduardo Javier Feria Rendon, Eva Dorschky, Daniel Krauss, Ingrid Ullmann, Bjoern M. Eskofier, Martin Vossiek
{"title":"Advanced Millimeter Wave Radar-Based Human Pose Estimation Enabled by a Deep Learning Neural Network Trained With Optical Motion Capture Ground Truth Data","authors":"Lukas Engel;Jonas Mueller;Eduardo Javier Feria Rendon;Eva Dorschky;Daniel Krauss;Ingrid Ullmann;Bjoern M. Eskofier;Martin Vossiek","doi":"10.1109/JMW.2025.3535525","DOIUrl":null,"url":null,"abstract":"This paper presents a deep learning-enabled method for human pose estimation using radar target lists, obtained through a low-cost radar system with three transmitters and four receivers in a multiple-input multiple-output setup. We address challenges in previous research that often relied on extracting ground truth poses from RGB data, which are constrained by the need for 3D mapping and vulnerability to occlusions. To overcome these limitations, we utilized optical motion capture, which is widely recognized as the gold standard for precise human motion analysis. We conducted an extensive optical motion capture study involving various recorded movement activities, which resulted in <italic>mmRadPose</i>, a new dataset that enhances existing benchmarks for radar-based pose estimation. This dataset has been made publicly accessible. Building on this approach, we designed an application-tailored radar signal processing chain to generate suitable input for the machine learning algorithm. We further developed an attentional recurrent-based deep learning model, <italic>PntPoseAT</i>, which predicts 24 keypoints of human poses using radar target lists. We employed cross validation to thoroughly evaluate the model. This model surpasses previous approaches and achieves an average mean per-joint position error of <inline-formula><tex-math>$6.49 \\,\\mathrm{c}\\mathrm{m}$</tex-math></inline-formula> with a standard deviation of <inline-formula><tex-math>$3.74 \\,\\mathrm{c}\\mathrm{m}$</tex-math></inline-formula> on totally unseen test data. This excellent accuracy of the reconstructed keypoint positions is particularly remarkable when you consider that a very simple radar was used for the measurements. Additionally, we conducted a comprehensive analysis of the model's performance by exploring aspects such as network architecture, the use of long short-term memory versus gated recurrent units, input data selection, and the integration of multi-head self-attention mechanisms.","PeriodicalId":93296,"journal":{"name":"IEEE journal of microwaves","volume":"5 2","pages":"373-387"},"PeriodicalIF":6.9000,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10904474","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of microwaves","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10904474/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

This paper presents a deep learning-enabled method for human pose estimation using radar target lists obtained through a low-cost radar system with three transmitters and four receivers in a multiple-input multiple-output (MIMO) setup. We address a limitation of previous research, which often relied on extracting ground-truth poses from RGB data and was therefore constrained by the need for 3D mapping and by vulnerability to occlusions. To overcome these limitations, we instead used optical motion capture, widely recognized as the gold standard for precise human motion analysis. We conducted an extensive optical motion capture study covering a variety of recorded movement activities, resulting in mmRadPose, a new, publicly available dataset that extends existing benchmarks for radar-based pose estimation. Building on this, we designed an application-tailored radar signal processing chain that generates suitable input for the machine learning algorithm, and we developed an attention-enhanced recurrent deep learning model, PntPoseAT, which predicts 24 human-pose keypoints from radar target lists. Evaluated with cross-validation, the model surpasses previous approaches, achieving an average mean per-joint position error of $6.49\,\mathrm{cm}$ with a standard deviation of $3.74\,\mathrm{cm}$ on entirely unseen test data. This accuracy of the reconstructed keypoint positions is particularly remarkable given that a very simple radar was used for the measurements. Additionally, we present a comprehensive analysis of the model's performance, exploring network architecture, the choice of long short-term memory versus gated recurrent units, input data selection, and the integration of multi-head self-attention mechanisms.
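For context on the headline numbers, below is a minimal sketch of the mean per-joint position error (MPJPE) metric over the 24 predicted keypoints. The array shapes and the per-frame aggregation of the standard deviation are assumptions; the abstract does not specify how the error is aggregated.

```python
import numpy as np

def mpjpe_stats(pred, gt):
    """Mean per-joint position error (MPJPE) and its spread.

    pred, gt: arrays of shape (T, 24, 3): T frames, 24 keypoints,
    3D coordinates (assumed here to be in centimeters).
    """
    # Euclidean distance per keypoint and frame.
    per_joint = np.linalg.norm(pred - gt, axis=-1)  # (T, 24)
    # MPJPE per frame, then mean and standard deviation over frames.
    per_frame = per_joint.mean(axis=1)              # (T,)
    return per_frame.mean(), per_frame.std()

# Hypothetical usage with random stand-in data (not mmRadPose).
rng = np.random.default_rng(0)
gt = rng.normal(size=(500, 24, 3)) * 50.0
pred = gt + rng.normal(scale=4.0, size=gt.shape)
mean_err, std_err = mpjpe_stats(pred, gt)
print(f"MPJPE: {mean_err:.2f} cm +/- {std_err:.2f} cm")
```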
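The abstract names the ingredients of PntPoseAT (a recurrent backbone, LSTM or GRU, combined with multi-head self-attention over radar target lists) without detailing the architecture. The following hypothetical PyTorch skeleton only illustrates how those pieces could be combined to regress 24 keypoints; the layer sizes, per-target feature layout, and pooling step are all assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class RadarPoseNet(nn.Module):
    """Hypothetical sketch, not PntPoseAT itself: self-attention over
    radar targets per frame, a recurrent model over frames, and a
    regression head producing 24 3D keypoints."""

    def __init__(self, in_dim=5, hidden=128, heads=4, num_keypoints=24):
        super().__init__()
        # Embed each radar target (e.g., x, y, z, Doppler, amplitude).
        self.embed = nn.Linear(in_dim, hidden)
        # Multi-head self-attention across the targets of one frame.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Temporal model over frames; nn.LSTM is a drop-in alternative.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_keypoints * 3)
        self.num_keypoints = num_keypoints

    def forward(self, x):
        # x: (batch, frames, targets, in_dim) radar target lists.
        b, t, n, d = x.shape
        z = self.embed(x.reshape(b * t, n, d))
        z, _ = self.attn(z, z, z)            # attend over targets per frame
        z = z.mean(dim=1).reshape(b, t, -1)  # pool targets -> frame feature
        z, _ = self.rnn(z)                   # temporal context across frames
        return self.head(z).reshape(b, t, self.num_keypoints, 3)

# Hypothetical usage: 2 sequences, 10 frames, 32 targets, 5 features each.
poses = RadarPoseNet()(torch.randn(2, 10, 32, 5))  # -> (2, 10, 24, 3)
```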
Source journal: IEEE Journal of Microwaves. CiteScore 10.70, self-citation rate 0.00%, review time 8 weeks.