Evidential deep learning-based multi-modal environment perception for intelligent vehicles

Mihreteab Negash Geletu, Danut-Vasile Giurgi, Thomas Josso-Laurain, M. Devanne, Mengesha Mamo Wogari, Jean-Philippe Lauffenburger
{"title":"Evidential deep learning-based multi-modal environment perception for intelligent vehicles","authors":"Mihreteab Negash Geletu, Danut-Vasile Giurgi, Thomas Josso-Laurain, M. Devanne, Mengesha Mamo Wogari, Jean-Philippe Lauffenburger","doi":"10.1109/IV55152.2023.10186581","DOIUrl":null,"url":null,"abstract":"Intelligent vehicles (IVs) are pursued in both research laboratories and industries to revolutionize transportation systems. Since the driving surroundings can be cluttered and the weather conditions may vary, environment perception in IVs represents a challenging task. Therefore, multi-modal sensors are engaged. In perception, outstanding performance is obtained by employing deep learning algorithms. However, deep learning often relies on probabilities while there is a better formalism to handle prediction uncertainty. To circumvent this, in this work, evidence theory is combined with a camera-lidar-based deep learning fusion architecture. The coupling is based on generating basic belief functions using distance to prototypes. It also uses a distance-based decision rule. Because IVs have constrained computational power, a reduced deep-learning architecture is leveraged in this formulation. In the task of road detection, the evidential approach outperforms the probabilistic one. Besides, ambiguous features can be prudently settled as ignorance rather than making a possibly wrong decision using probability. The coupling is also extended to the task of semantic segmentation. This shows how evidential formulation can be easily adapted to the multi-class case. Therefore, the evidential formulation is generic and produces a more accurate and versatile prediction while maintaining the trade-off between performances and computational costs in IVs. This work uses the KITTI dataset.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IV55152.2023.10186581","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Intelligent vehicles (IVs) are pursued in both research laboratories and industry to revolutionize transportation systems. Because driving surroundings can be cluttered and weather conditions vary, environment perception in IVs is a challenging task, and multi-modal sensors are therefore employed. Deep learning algorithms achieve outstanding perception performance, but they typically express uncertainty through probabilities, whereas evidence theory offers a richer formalism for handling prediction uncertainty. To address this, evidence theory is combined in this work with a camera-lidar deep learning fusion architecture. The coupling generates basic belief functions from distances to class prototypes and applies a distance-based decision rule. Because IVs have constrained computational power, a reduced deep-learning architecture is used in this formulation. In the task of road detection, the evidential approach outperforms the probabilistic one; moreover, ambiguous features can be prudently assigned to ignorance rather than forcing a possibly wrong probabilistic decision. The coupling is also extended to semantic segmentation, showing that the evidential formulation adapts easily to the multi-class case. The evidential formulation is thus generic and yields more accurate and versatile predictions while maintaining the trade-off between performance and computational cost in IVs. Experiments use the KITTI dataset.
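The abstract describes the coupling only at a high level: basic belief functions are built from distances to class prototypes, combined, and then resolved with a distance-based decision rule. The paper's exact construction is not reproduced here, so the following is a minimal sketch of the general distance-to-prototype idea in the spirit of Denœux-style evidential classifiers; the function names (evidential_masses, decide), the parameters gamma and alpha, and the simplified abstention rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def evidential_masses(features, prototypes, gamma=1.0, alpha=0.9):
    """Turn feature vectors into Dempster-Shafer masses from prototype distances.

    Each class prototype p_k induces a simple belief function with mass
    alpha * exp(-gamma * d^2(x, p_k)) on the singleton {class k} and the
    remainder on the frame Omega (ignorance). The K simple functions are
    combined with Dempster's rule, which has a closed form when every focal
    set is a distinct singleton or Omega.

    features   : (N, D) feature vectors
    prototypes : (K, D) class prototypes
    returns    : (N, K + 1) masses; columns 0..K-1 are singletons, column K is Omega.
    """
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    s = alpha * np.exp(-gamma * d2)            # support for each singleton, in [0, alpha]
    one_minus = 1.0 - s
    total = np.prod(one_minus, axis=1, keepdims=True)       # prod_j (1 - s_j)
    prod_others = total / np.clip(one_minus, 1e-12, None)   # prod_{j != k} (1 - s_j)
    m_singletons = s * prod_others             # unnormalized mass on each {class k}
    m_omega = total                            # unnormalized mass on Omega
    norm = m_singletons.sum(axis=1, keepdims=True) + m_omega  # = 1 - conflict
    return np.concatenate([m_singletons, m_omega], axis=1) / norm

def decide(masses):
    """Simplified decision: pick the class with the largest singleton mass,
    or abstain (return -1) when the ignorance mass dominates every singleton,
    i.e. the feature is too far from all prototypes to commit to a class."""
    singles, omega = masses[:, :-1], masses[:, -1]
    best = singles.argmax(axis=1)
    return np.where(singles.max(axis=1) >= omega, best, -1)

if __name__ == "__main__":
    # Hypothetical two-class example (e.g. road / not-road) with an ambiguous
    # feature halfway between the prototypes: most mass stays on Omega.
    prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
    feats = np.array([[2.5, 2.5], [0.2, -0.1]])
    m = evidential_masses(feats, prototypes)
    print(m)
    print(decide(m))
```

The last column keeps one explicit mass on the whole frame Omega, so a pixel far from every prototype retains most of its belief as ignorance and can be reported as such instead of being forced into a class, which is the behaviour the abstract highlights for ambiguous features.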