Generative AI-Empowered RFID Sensing for 3D Human Pose Augmentation and Completion

Impact Factor: 6.3 · Q1 (Engineering, Electrical & Electronic)
Ziqi Wang; Shiwen Mao
Journal: IEEE Open Journal of the Communications Society, vol. 6, 2025
DOI: 10.1109/OJCOMS.2025.3539705
Published: 2025-02-07 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/10877927/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10877927
Citations: 0

Abstract

Collecting paired Radio Frequency Identification (RFID) data and corresponding 3D human pose data is challenging due to practical limitations, such as the discomfort of wearing numerous RFID tags and the inconvenience of timestamp synchronization between RFID and camera data. We propose a novel framework that leverages latent diffusion transformers to generate high-quality, diverse RFID sensing data across multiple classes. This synthetic data augments limited datasets, which are then used to train a transformer-based kinematics predictor that estimates temporally smooth 3D poses from RFID data. Most importantly, we introduce a latent diffusion transformer training stage with cross-attention conditioning and an inference design of two-stage velocity alignment to accurately infer missing joints in skeletal poses, completing full 25-joint configurations from partial 12-joint inputs. This is the first method to detect >20 distinct skeletal joints using Generative-AI technologies for any wireless sensing-based continuous 3D human pose estimation (HPE) task. The application is particularly important for RFID-based systems, which typically capture limited joint information due to RFID sensing constraints. Our approach can extend the applicability of wireless-based pose estimation in scenarios where collecting extensive paired datasets is impractical and achieving more fine-grained joint information is infeasible, such as pedestrian and health monitoring in occluded environments.
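The abstract describes a two-stage completion pipeline: a conditioned generative stage that infers the 13 missing joints from the 12 observed ones, followed by a velocity-alignment pass for temporal smoothness. The toy sketch below is purely illustrative and not the paper's actual model — the "denoising" here is a simple iterative pull toward a conditioning signal (the observed-joint centroid), and the smoothing is a (1, 2, 1)/4 temporal average; all function names, constants, and mechanics are assumptions made for this sketch.

```python
import random

OBSERVED = 12   # joints observed by the RFID system
TOTAL = 25      # joints in the completed skeleton

def complete_frame(observed, steps=40, pull=0.2, rng=None):
    """Toy stand-in for the conditioned denoising stage: initialize the
    13 missing joints at random noise and iteratively pull them toward
    the centroid of the 12 observed joints (the conditioning signal)."""
    rng = rng or random.Random(0)
    cx = sum(x for x, _ in observed) / len(observed)
    cy = sum(y for _, y in observed) / len(observed)
    pose = list(observed) + [
        (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        for _ in range(TOTAL - OBSERVED)
    ]
    for _ in range(steps):
        # only the missing joints are refined; observed joints stay fixed
        pose[OBSERVED:] = [(x + pull * (cx - x), y + pull * (cy - y))
                           for x, y in pose[OBSERVED:]]
    return pose

def velocity_align(frames):
    """Toy stand-in for the velocity-alignment pass: smooth each interior
    frame with a (1, 2, 1)/4 average of its temporal neighbours, damping
    frame-to-frame velocity jumps while keeping the endpoints fixed."""
    out = [frames[0]]
    for prev, cur, nxt in zip(frames, frames[1:], frames[2:]):
        out.append([((p[0] + 2 * c[0] + n[0]) / 4,
                     (p[1] + 2 * c[1] + n[1]) / 4)
                    for p, c, n in zip(prev, cur, nxt)])
    out.append(frames[-1])
    return out
```

The real system replaces both stands-ins with learned components (a latent diffusion transformer conditioned via cross-attention, and a two-stage velocity-alignment inference design); the sketch only conveys the shape of the pipeline, i.e., per-frame completion followed by temporal smoothing.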
Source Journal Metrics
CiteScore: 13.70
Self-citation rate: 3.80%
Articles published: 94
Review time: 10 weeks
About the Journal: The IEEE Open Journal of the Communications Society (OJ-COMS) is an open access, all-electronic journal that publishes original, high-quality manuscripts on advances in the state of the art of telecommunications systems and networks. Papers in IEEE OJ-COMS are indexed in Scopus. Submissions reporting new theoretical findings (including novel methods, concepts, and studies) and practical contributions (including experiments and prototype development) are welcome, as are survey and tutorial articles. IEEE OJ-COMS received its debut impact factor of 7.9 in the Journal Citation Reports (JCR) 2023. The journal covers science, technology, applications, and standards for information organization, collection, and transfer using electronic, optical, and wireless channels and networks. Specific areas covered include:
- Systems and network architecture, control, and management
- Protocols, software, and middleware
- Quality of service, reliability, and security
- Modulation, detection, coding, and signaling
- Switching and routing
- Mobile and portable communications
- Terminals and other end-user devices
- Networks for content distribution and distributed computing
- Communications-based distributed resources control