Human Activity Recognition Based on Feature Fusion of Millimeter Wave Radar and Inertial Navigation

IF 6.9 Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Jiajia Shi, Yihan Zhu, Jiaqing He, Zhihuo Xu, Liu Chu, Robin Braun, Quan Shi
{"title":"Human Activity Recognition Based on Feature Fusion of Millimeter Wave Radar and Inertial Navigation","authors":"Jiajia Shi;Yihan Zhu;Jiaqing He;Zhihuo Xu;Liu Chu;Robin Braun;Quan Shi","doi":"10.1109/JMW.2025.3539957","DOIUrl":null,"url":null,"abstract":"Human activity recognition (HAR) technology is increasingly utilized in domains such as security surveillance, nursing home monitoring, and health assessment. The integration of multi-sensor data improves recognition efficiency and the precision of behavioral analysis by offering a more comprehensive view of human activities. However, challenges arise due to the diversity of data types, dimensions, sampling rates, and environmental disturbances, which complicate feature extraction and data fusion. To address these challenges, we propose a HAR approach that fuses millimeter-wave radar and inertial navigation data using bimodal neural networks. We first design a comprehensive data acquisition framework that integrates both radar and inertial navigation systems, with a focus on ensuring time synchronization. The radar data undergoes range compression, moving target indication (MTI), short-time Fourier transforms (STFT), and wavelet transforms to reduce noise and improve quality and stability. The inertial navigation data is refined through moving average filtering and hysteresis compensation to enhance accuracy and reduce latency. Next, we introduce the Radar-Inertial Navigation Multi-modal Fusion Attention (T-C-RIMFA) model. In this model, a Convolutional Neural Network (CNN) processes the 1D inertial navigation data for feature extraction, while a channel attention mechanism prioritizes features from different convolutional kernels. Simultaneously, a Vision Transformer (ViT) interprets features from radar-derived micro-Doppler images. Experimental results demonstrate significant improvements in HAR tasks, achieving an accuracy of 0.988. This approach effectively leverages the strengths of both sensors, enhancing the accuracy and robustness of HAR systems.","PeriodicalId":93296,"journal":{"name":"IEEE journal of microwaves","volume":"5 2","pages":"409-424"},"PeriodicalIF":6.9000,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10916995","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of microwaves","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10916995/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Human activity recognition (HAR) technology is increasingly utilized in domains such as security surveillance, nursing home monitoring, and health assessment. The integration of multi-sensor data improves recognition efficiency and the precision of behavioral analysis by offering a more comprehensive view of human activities. However, challenges arise due to the diversity of data types, dimensions, sampling rates, and environmental disturbances, which complicate feature extraction and data fusion. To address these challenges, we propose a HAR approach that fuses millimeter-wave radar and inertial navigation data using bimodal neural networks. We first design a comprehensive data acquisition framework that integrates both radar and inertial navigation systems, with a focus on ensuring time synchronization. The radar data undergoes range compression, moving target indication (MTI), short-time Fourier transforms (STFT), and wavelet transforms to reduce noise and improve quality and stability. The inertial navigation data is refined through moving average filtering and hysteresis compensation to enhance accuracy and reduce latency. Next, we introduce the Radar-Inertial Navigation Multi-modal Fusion Attention (T-C-RIMFA) model. In this model, a Convolutional Neural Network (CNN) processes the 1D inertial navigation data for feature extraction, while a channel attention mechanism prioritizes features from different convolutional kernels. Simultaneously, a Vision Transformer (ViT) interprets features from radar-derived micro-Doppler images. Experimental results demonstrate significant improvements in HAR tasks, achieving an accuracy of 0.988. This approach effectively leverages the strengths of both sensors, enhancing the accuracy and robustness of HAR systems.
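
The radar preprocessing chain summarized above (range compression, MTI clutter suppression, and a short-time Fourier transform over slow time to obtain micro-Doppler signatures) can be sketched in a few lines. The following is a minimal illustration using NumPy and SciPy; the input shape, the mean-subtraction MTI, the range-bin summation, and the STFT parameters are assumptions made for clarity, not the authors' exact processing.

    # Illustrative FMCW radar preprocessing: range FFT, MTI by clutter
    # (mean) removal, then an STFT over slow time -> micro-Doppler image.
    import numpy as np
    from scipy.signal import stft

    def micro_doppler_spectrogram(iq_cube, fs_slow=1000.0, nperseg=128, noverlap=96):
        """iq_cube: complex array, shape (num_chirps, samples_per_chirp)."""
        # Range compression: FFT along fast time (samples within a chirp).
        range_profiles = np.fft.fft(iq_cube, axis=1)
        # Moving target indication: subtract the per-range-bin mean over
        # slow time to suppress static clutter.
        mti = range_profiles - range_profiles.mean(axis=0, keepdims=True)
        # Collapse range bins into one slow-time signal that carries the
        # micro-Doppler modulation of the moving body parts.
        slow_time = mti.sum(axis=1)
        # Short-time Fourier transform over slow time -> time-Doppler map.
        f, t, Z = stft(slow_time, fs=fs_slow, nperseg=nperseg,
                       noverlap=noverlap, return_onesided=False)
        spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-6)
        return f, t, spectrogram_db

The resulting time-Doppler magnitude image is the kind of input a spectrogram-based classifier (here, the ViT branch) would consume; the wavelet-transform denoising step mentioned in the abstract is omitted from this sketch.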
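The bimodal T-C-RIMFA idea described in the abstract (a 1D CNN with channel attention over the inertial data, a Vision Transformer over the micro-Doppler images, and fusion of the two feature streams) can be sketched as follows. This PyTorch example is an illustrative reconstruction only: the layer sizes, the squeeze-and-excitation-style attention block, the torchvision vit_b_16 backbone, and the late concatenation fusion are assumptions, not the published architecture.

    # Hypothetical bimodal HAR network in the spirit of T-C-RIMFA.
    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style reweighting of 1D conv channels."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):                      # x: (batch, channels, time)
            weights = self.fc(x.mean(dim=-1))      # squeeze over time
            return x * weights.unsqueeze(-1)       # reweight each channel

    class BimodalHAR(nn.Module):
        def __init__(self, imu_channels=6, num_classes=7):
            super().__init__()
            self.imu_branch = nn.Sequential(       # 1D CNN for inertial data
                nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                ChannelAttention(64),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())         # -> (batch, 64)
            self.radar_branch = vit_b_16(weights=None)          # -> (batch, 1000)
            self.classifier = nn.Linear(64 + 1000, num_classes)

        def forward(self, imu, spectrogram):
            # imu: (batch, 6, T); spectrogram: (batch, 3, 224, 224)
            fused = torch.cat([self.imu_branch(imu),
                               self.radar_branch(spectrogram)], dim=1)
            return self.classifier(fused)

In such a setup the inertial stream would first be smoothed (the abstract's moving-average filtering and hysteresis compensation) and the micro-Doppler spectrograms resized to the 224x224 resolution the ViT backbone expects; the number of activity classes is a placeholder.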
Source journal: IEEE Journal of Microwaves · CiteScore 10.70 · Self-citation rate 0.00% · Review time: 8 weeks