Detection of moving small targets in infrared images for urban traffic monitoring

IF 6.0 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Juan Wang, Hao Yang, Zizhen Zhang, Nan Zhao, Jixiang Shao, Minghua Wu, Zhigang Ma, Jialu Zhu, Xu An Wang, Haina Song
{"title":"Detection of moving small targets in infrared images for urban traffic monitoring","authors":"Juan Wang ,&nbsp;Hao Yang ,&nbsp;Zizhen Zhang ,&nbsp;Nan Zhao ,&nbsp;Jixiang Shao ,&nbsp;Minghua Wu ,&nbsp;Zhigang Ma ,&nbsp;Jialu Zhu ,&nbsp;Xu An Wang ,&nbsp;Haina Song","doi":"10.1016/j.iot.2025.101673","DOIUrl":null,"url":null,"abstract":"<div><div>The Internet of Vehicles (IoV) and autonomous driving technologies require increasingly robust object detection capabilities, especially for small objects. However, reliably detecting small objects in urban traffic scenarios remains technically challenging under adverse weather conditions, including low illumination, rain, and snow. To address these challenges, we propose a fused IR–visible imaging approach using an enhanced YOLOv9 architecture. The proposed method employs a dual-branch semantic enhancement architecture, which achieves dynamic inter-modal feature weighting through a channel attention mechanism. The visible branch preserves texture details, while the infrared branch extracts thermal radiation characteristics, followed by multi-scale feature-level fusion. Firstly, we present UR-YOLO designed for detecting small targets in urban traffic environments. Secondly, we propose a novel DeeperFuse module that incorporates dual-branch semantic enhancement and channel attention mechanisms for effective multimodal feature fusion. Finally, by jointly optimizing fusion and detection losses, the method preserves critical details, enhances clarity and contrast. Experimental evaluation on the M<sup>\\relax \\special {t4ht=<sup>3</sup>}</sup>FD dataset demonstrates improved detection performance relative to the baseline YOLOv9 model. The results show an increase of 1.4 percentage points in mAP (from 83.3% to 84.7%) and 2.2 percentage points in <span><math><mrow><mi>A</mi><msub><mrow><mi>P</mi></mrow><mrow><mi>s</mi><mi>m</mi><mi>a</mi><mi>l</mi><mi>l</mi></mrow></msub></mrow></math></span> (from 51.6% to 53.8%). Furthermore, our method achieves real-time processing at 30 FPS, making it suitable for deployment in urban autonomous driving scenarios. Future work will focus on enhancing model performance via multimodal fusion, lightweight design, and multi-scale feature learning. We will also develop diverse datasets to advance autonomous driving perception in complex environments.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"33 ","pages":"Article 101673"},"PeriodicalIF":6.0000,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet of Things","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2542660525001878","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The Internet of Vehicles (IoV) and autonomous driving technologies require increasingly robust object detection capabilities, especially for small objects. However, reliably detecting small objects in urban traffic scenarios remains technically challenging under adverse weather conditions, including low illumination, rain, and snow. To address these challenges, we propose a fused infrared–visible imaging approach built on an enhanced YOLOv9 architecture. The proposed method employs a dual-branch semantic enhancement architecture, which achieves dynamic inter-modal feature weighting through a channel attention mechanism. The visible branch preserves texture details, while the infrared branch extracts thermal radiation characteristics, followed by multi-scale feature-level fusion. Firstly, we present UR-YOLO, designed for detecting small targets in urban traffic environments. Secondly, we propose a novel DeeperFuse module that incorporates dual-branch semantic enhancement and channel attention mechanisms for effective multimodal feature fusion. Finally, by jointly optimizing fusion and detection losses, the method preserves critical details and enhances clarity and contrast. Experimental evaluation on the M³FD dataset demonstrates improved detection performance relative to the baseline YOLOv9 model. The results show an increase of 1.4 percentage points in mAP (from 83.3% to 84.7%) and 2.2 percentage points in AP_small (from 51.6% to 53.8%). Furthermore, our method achieves real-time processing at 30 FPS, making it suitable for deployment in urban autonomous driving scenarios. Future work will focus on enhancing model performance via multimodal fusion, lightweight design, and multi-scale feature learning. We will also develop diverse datasets to advance autonomous driving perception in complex environments.
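To make the fusion idea concrete, the following is a minimal sketch of a channel-attention-based dual-branch fusion block in the spirit of the DeeperFuse module the abstract describes. It is not the authors' released code: the squeeze-and-excitation style attention, layer sizes, activations, and the class names ChannelAttention and DualBranchFusion are all assumptions for illustration.

```python
# Hypothetical sketch of dual-branch IR–visible fusion with channel attention.
# Not the paper's implementation; structure and hyperparameters are assumed.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style per-channel gating (assumed design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                               # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                          # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class DualBranchFusion(nn.Module):
    """Fuse visible (texture) and infrared (thermal) feature maps.

    Each modality is refined in its own branch, weighted dynamically by
    channel attention, then merged with a 1x1 convolution.
    """
    def __init__(self, channels: int):
        super().__init__()
        branch = lambda: nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU(inplace=True),
        )
        self.vis_branch = branch()
        self.ir_branch = branch()
        self.vis_att = ChannelAttention(channels)
        self.ir_att = ChannelAttention(channels)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        vis = self.vis_att(self.vis_branch(vis))   # texture-oriented branch
        ir = self.ir_att(self.ir_branch(ir))       # thermal-oriented branch
        return self.merge(torch.cat([vis, ir], dim=1))


if __name__ == "__main__":
    fuse = DualBranchFusion(channels=64)
    v = torch.randn(1, 64, 80, 80)   # visible feature map
    t = torch.randn(1, 64, 80, 80)   # infrared feature map
    print(fuse(v, t).shape)          # torch.Size([1, 64, 80, 80])
```

In such a design, the joint optimization mentioned in the abstract would typically be realized as a weighted sum of the detection loss and a fusion (reconstruction/contrast) loss; the exact weighting used by the paper is not stated here.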
Source journal: Internet of Things
CiteScore: 3.60
Self-citation rate: 5.10%
Articles per year: 115
Review time: 37 days
Journal description: Internet of Things: Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers, and practitioners in the field of IoT and cyber-physical human systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT. It places a high priority on timely publication and provides a home for high-quality research. The journal also publishes topical special issues on any aspect of the IoT.