RadSpecFusion: Dynamic attention weighting for multi-radar human activity recognition

IF 7.6 | CAS Zone 3 (Computer Science) | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Ayesha Ibrahim, Muhammad Zakir Khan, Muhammad Imran, Hadi Larijani, Qammer H. Abbasi, Muhammad Usman
{"title":"RadSpecFusion: Dynamic attention weighting for multi-radar human activity recognition","authors":"Ayesha Ibrahim ,&nbsp;Muhammad Zakir Khan ,&nbsp;Muhammad Imran ,&nbsp;Hadi Larijani ,&nbsp;Qammer H. Abbasi ,&nbsp;Muhammad Usman","doi":"10.1016/j.iot.2025.101682","DOIUrl":null,"url":null,"abstract":"<div><div>This paper presents RadSpecFusion, a novel dynamic attention-based fusion architecture for multi-radar human activity recognition (HAR). Our method learns activity-specific importance weights for each radar modality (24 GHz, 77 GHz, and Xethru sensors). Unlike existing concatenation or averaging approaches, our method dynamically adapts radar contributions based on motion characteristics. This addresses cross-frequency generalization challenges, where transfer learning methods achieve only 11%–34% accuracy. Using the CI4R dataset with spectrograms from 11 activities, our approach achieves 99.21% accuracy, representing a 15.8% improvement over existing fusion methods (83.4%). This demonstrates that different radar frequencies capture complementary information about human motion. Ablation studies show that while the three-radar system optimizes performance, dual-radar combinations achieve comparable accuracy (24GHz+77GHz: 96.1%, 24GHz+Xethru: 95.8%, 77GHz+Xethru: 97.2%), enabling flexible deployment for resource-constrained applications. The attention mechanism reveals interpretable patterns: 77 GHz radar receives higher weights for fine movements (superior Doppler resolution), while 24 GHz dominates gross body movements (better range resolution). The system maintains 71.4% accuracy at 10 dB SNR, demonstrating environmental robustness. This research establishes a new paradigm for multimodal radar fusion, moving from cross-frequency transfer learning to adaptive fusion with implications for healthcare monitoring, smart environments, and security applications.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"33 ","pages":"Article 101682"},"PeriodicalIF":7.6000,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet of Things","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2542660525001969","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents RadSpecFusion, a novel dynamic attention-based fusion architecture for multi-radar human activity recognition (HAR). Our method learns activity-specific importance weights for each radar modality (24 GHz, 77 GHz, and Xethru sensors). Unlike existing concatenation or averaging approaches, our method dynamically adapts radar contributions based on motion characteristics. This addresses cross-frequency generalization challenges, where transfer learning methods achieve only 11%–34% accuracy. Using the CI4R dataset with spectrograms from 11 activities, our approach achieves 99.21% accuracy, a 15.8-percentage-point improvement over existing fusion methods (83.4%). This demonstrates that different radar frequencies capture complementary information about human motion. Ablation studies show that while the three-radar system yields the best performance, dual-radar combinations achieve comparable accuracy (24 GHz + 77 GHz: 96.1%, 24 GHz + Xethru: 95.8%, 77 GHz + Xethru: 97.2%), enabling flexible deployment for resource-constrained applications. The attention mechanism reveals interpretable patterns: the 77 GHz radar receives higher weights for fine movements (superior Doppler resolution), while the 24 GHz radar dominates gross body movements (better range resolution). The system maintains 71.4% accuracy at 10 dB SNR, demonstrating environmental robustness. This research establishes a new paradigm for multimodal radar fusion, moving from cross-frequency transfer learning to adaptive fusion, with implications for healthcare monitoring, smart environments, and security applications.
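
To make the fusion mechanism concrete, here is a minimal, hypothetical PyTorch sketch of attention-weighted multi-radar fusion as described above: three per-radar spectrogram encoders feed a softmax attention head that produces per-sample importance weights for the 24 GHz, 77 GHz, and Xethru streams, and an 11-class classifier operates on the weighted sum. The encoder architecture, layer sizes, and the `RadarBranch`/`AttentionFusionHAR` names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code) of dynamic
# attention-weighted fusion over three radar spectrogram streams.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RadarBranch(nn.Module):
    """Small CNN encoder for one radar's spectrogram (hypothetical layer sizes)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))  # (B, feat_dim)


class AttentionFusionHAR(nn.Module):
    """Fuses 24 GHz, 77 GHz, and Xethru features with input-dependent weights."""

    def __init__(self, feat_dim: int = 64, num_classes: int = 11):
        super().__init__()
        self.branches = nn.ModuleList([RadarBranch(feat_dim) for _ in range(3)])
        # One scalar score per radar; softmax turns scores into weights that
        # sum to 1, so each sample gets its own radar-importance distribution.
        self.score = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, spectrograms):
        # spectrograms: list of three tensors, each (B, 1, H, W)
        feats = torch.stack(
            [branch(x) for branch, x in zip(self.branches, spectrograms)], dim=1
        )                                                            # (B, 3, feat_dim)
        weights = F.softmax(self.score(feats).squeeze(-1), dim=1)    # (B, 3)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)           # (B, feat_dim)
        return self.classifier(fused), weights


if __name__ == "__main__":
    model = AttentionFusionHAR()
    # Dummy batch of 4 samples: one spectrogram per radar (24 GHz, 77 GHz, Xethru).
    batch = [torch.randn(4, 1, 128, 128) for _ in range(3)]
    logits, radar_weights = model(batch)
    print(logits.shape)        # torch.Size([4, 11])
    print(radar_weights[0])    # per-radar weights for the first sample
```

The returned weights can be inspected per activity, which is how the kind of interpretability reported in the abstract (e.g., higher 77 GHz weights for fine movements) would be read off in such a design.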
Source journal
Internet of Things
CiteScore: 3.60
Self-citation rate: 5.10%
Articles published: 115
Review time: 37 days
Journal description: Internet of Things: Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers, and practitioners in the field of IoT and cyber-physical human systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT. The journal places a high priority on timely publication and provides a home for high-quality work. Furthermore, IoT is interested in publishing topical Special Issues on any aspect of IoT.