A tree-based approach for visible and thermal sensor fusion in winter autonomous driving

IF 2.4 · CAS Zone 4 (Computer Science) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jonathan Boisclair, Ali Amamou, Sousso Kelouwani, M. Zeshan Alam, Hedi Oueslati, Lotfi Zeghmi, Kodjo Agbossou
DOI: 10.1007/s00138-024-01546-y
Journal: Machine Vision and Applications
Publication date: 2024-05-03 (Journal Article)
Citations: 0

Abstract

Research on autonomous vehicles has recently been at a peak. One of the most studied issues is the performance degradation of sensors in harsh weather conditions such as rain, snow, fog, and hail. This work addresses that degradation by fusing multiple sensor modalities inside the detection network itself. The proposed fusion method removes the pre-processing fusion stage and produces detection boxes directly from several input images, reducing computation cost by performing detection and fusion simultaneously. Because the network branches only in its initial layers, it can easily be adapted to new sensors. Intra-network fusion improves robustness to missing inputs, applies to all compatible input types, and reduces peak computing cost by using a valley-fill algorithm. Our experiments demonstrate that adopting a parallel multimodal network to fuse thermal images improves object detection in difficult weather conditions such as harsh winters by up to 5% mAP while reducing dataset bias in complicated weather. It does so with around 50% fewer parameters than late-fusion approaches, which duplicate the whole network rather than only the first section of the feature extractor.
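The parameter claim in the last sentence can be sanity-checked with simple arithmetic. The counts below are hypothetical, chosen only for illustration: late fusion runs a full copy of the detection network per modality, while intra-network fusion duplicates only the early section of the feature extractor, so the saving approaches 50% whenever the shared remainder of the network dominates the parameter budget.

```python
# Hypothetical parameter counts (illustrative only, not from the paper).
early_params = 0.5e6   # first section of the feature extractor, duplicated per modality
rest_params  = 24e6    # shared remainder of the detection network

# Late fusion: one full network per modality (visible + thermal).
late_fusion = 2 * (early_params + rest_params)

# Intra-network fusion: only the early section is duplicated; the rest is shared.
mid_fusion = 2 * early_params + rest_params

savings = 1 - mid_fusion / late_fusion
print(f"late fusion:          {late_fusion / 1e6:.1f}M params")
print(f"intra-network fusion: {mid_fusion / 1e6:.1f}M params")
print(f"reduction:            {savings:.0%}")  # close to 50% when rest_params >> early_params
```

With these numbers the reduction comes out to 49%, consistent with the "around 50% fewer parameters" figure when the shared trunk is much larger than the duplicated early layers.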


Source journal

Machine Vision and Applications (Engineering Technology – Engineering: Electrical & Electronic)
CiteScore: 6.30
Self-citation rate: 3.00%
Articles per year: 84
Review time: 8.7 months

Journal description: Machine Vision and Applications publishes high-quality technical contributions in machine vision research and development. Specifically, the editors encourage submittals in all applications and engineering aspects of image-related computing. In particular, original contributions dealing with scientific, commercial, industrial, military, and biomedical applications of machine vision are all within the scope of the journal. Particular emphasis is placed on engineering and technology aspects of image processing and computer vision. The following aspects of machine vision applications are of interest: algorithms, architectures, VLSI implementations, AI techniques and expert systems for machine vision, front-end sensing, multidimensional and multisensor machine vision, real-time techniques, image databases, virtual reality and visualization. Papers must include a significant experimental validation component.