Dual radar vision: A feature fusion approach for advanced object detection in IoT radar networks

Philipp Reitz, Tobias Veihelmann, Norman Franchi, Maximilian Lübke

Machine Learning with Applications, Vol. 21, Article 100703, published 2025-07-16. DOI: 10.1016/j.mlwa.2025.100703
60 GHz radar technology is one of the most promising movement detector solutions for Internet of Things (IoT) applications. However, challenges remain in accurately classifying different objects and detecting small objects in a multi-target scenario. This work investigates whether sensor fusion between multiple radars can enhance object detection and classification performance. A one-stage detection architecture, designed based on the features of the latest YOLO generations, is used to perform fusion based on range-Doppler (RD) maps of two non-coherent spatially separated radars. A complete physical 3D propagation simulation using ray tracing evaluates the fusion methods. This approach enables precise ground truth, as all unprocessed signal components are known, and guarantees a consistent, error-free reference. Results demonstrate that dynamic, attention-based fusion significantly improves detection and classification compared to static fusion in homogeneous and heterogeneous radar setups.
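To make the contrast between static and dynamic (attention-based) fusion concrete, the following is a minimal, hypothetical PyTorch sketch of fusing two radars' range-Doppler feature maps. It is not the paper's actual architecture: the module names, layer choices, and tensor shapes are illustrative assumptions. Static fusion combines the two feature maps with a fixed weight, while the attention variant predicts per-location weights from the inputs themselves.

```python
import torch
import torch.nn as nn


class StaticFusion(nn.Module):
    """Static fusion: a fixed, input-independent blend of the two radar feature maps."""

    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha  # assumed constant weight for radar A

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Both tensors assumed shape: (batch, channels, range_bins, doppler_bins)
        return self.alpha * feat_a + (1.0 - self.alpha) * feat_b


class AttentionFusion(nn.Module):
    """Dynamic fusion: per-location weights predicted from the concatenated features (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # Small convolutional head that outputs one weight map per radar.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, kernel_size=1),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Softmax over the two-radar dimension yields weights that sum to 1 per location.
        weights = torch.softmax(self.gate(torch.cat([feat_a, feat_b], dim=1)), dim=1)
        w_a, w_b = weights[:, 0:1], weights[:, 1:2]  # each (batch, 1, H, W)
        return w_a * feat_a + w_b * feat_b


if __name__ == "__main__":
    # Two radars' range-Doppler feature maps with an assumed shape of (1, 32, 64, 64).
    rd_a = torch.randn(1, 32, 64, 64)
    rd_b = torch.randn(1, 32, 64, 64)
    fused = AttentionFusion(channels=32)(rd_a, rd_b)
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

In this sketch the fused feature map would then feed a one-stage, YOLO-style detection head; the key point is that the attention weights adapt to the inputs, whereas the static blend cannot favor whichever radar sees a given target more clearly.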