Evidence-Based Real-Time Road Segmentation With RGB-D Data Augmentation

Feng Xue; Yicong Chang; Wenzhuang Xu; Wenteng Liang; Fei Sheng; Anlong Ming

IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, pp. 1482-1493
DOI: 10.1109/TITS.2024.3509140
Published: 2025-01-03
Citations: 0
Abstract
Despite significant progress in RGB-D based road segmentation in recent years, the latest methods cannot achieve both state-of-the-art accuracy and real-time speed, because their high performance relies on heavy network structures. We argue that this reliance stems from unsuitable multimodal fusion. Specifically, RGB and depth data in road scenes are each sensitive to different regions, but current RGB-D based road segmentation methods generally combine features within sensitive regions, which preserves false road representations from one of the modalities. Based on these findings, we design an Evidence-based Road Segmentation Method (Evi-RoadSeg), which incorporates prior knowledge of the modality-specific characteristics. Firstly, we abandon the cross-modal fusion operation commonly used in existing multimodal methods. Instead, we collect road evidence from the RGB and depth inputs separately via two low-latency subnetworks, and fuse the road representations of the two subnetworks by taking each modality's evidence as a measure of confidence. Secondly, we propose an RGB-D data augmentation scheme tailored to road scenes that enhances the unique properties of RGB and depth data; it facilitates learning by adding more sensitive regions to the samples. Finally, the proposed method is evaluated on the widely used KITTI-road, ORFD, and R2D datasets. Our method achieves state-of-the-art accuracy at over 70 FPS, 5× faster than comparable RGB-D methods. Furthermore, extensive experiments show that our method can be deployed on a Jetson Nano 2GB at 8+ FPS. The code will be released at https://github.com/xuefeng-cvr/Evi-RoadSeg.
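The fusion idea described above, combining two per-pixel road predictions by weighting each modality with its own evidence-derived confidence, can be sketched as follows. This is a minimal illustration in the style of subjective-logic evidential fusion, not the paper's exact formulation: the function name `fuse_evidence`, the Dirichlet parameterization `alpha = evidence + 1`, and the confidence weighting `1 - uncertainty` are all assumptions for the sketch.

```python
import numpy as np

def fuse_evidence(e_rgb, e_depth, num_classes=2):
    """Fuse per-pixel class evidence from two modality subnetworks.

    e_rgb, e_depth: non-negative evidence arrays of shape (..., num_classes),
    e.g. the outputs of the RGB and depth subnetworks after a softplus.
    Returns fused class probabilities of the same shape.
    (Hypothetical sketch, not the authors' released implementation.)
    """
    # Subjective-logic convention: Dirichlet parameters alpha = evidence + 1.
    alpha_rgb = e_rgb + 1.0
    alpha_depth = e_depth + 1.0

    # Dirichlet strength S and uncertainty mass u = K / S per pixel.
    s_rgb = alpha_rgb.sum(axis=-1, keepdims=True)
    s_depth = alpha_depth.sum(axis=-1, keepdims=True)
    u_rgb = num_classes / s_rgb
    u_depth = num_classes / s_depth

    # Each modality's confidence is 1 - uncertainty; a modality with little
    # evidence (e.g. depth in a textureless region) contributes less.
    w_rgb = 1.0 - u_rgb
    w_depth = 1.0 - u_depth

    # Expected class probabilities under each Dirichlet, then a
    # confidence-weighted average across the two modalities.
    p_rgb = alpha_rgb / s_rgb
    p_depth = alpha_depth / s_depth
    return (w_rgb * p_rgb + w_depth * p_depth) / (w_rgb + w_depth + 1e-8)

# A confident RGB prediction dominates an uncertain depth prediction:
e_rgb = np.array([[5.0, 1.0]])    # strong road evidence from RGB
e_depth = np.array([[0.1, 0.1]])  # near-zero evidence from depth
fused = fuse_evidence(e_rgb, e_depth)
```

With these inputs the fused distribution stays close to the RGB prediction, since the depth branch's low evidence yields high uncertainty and thus a small fusion weight.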
About the journal
IEEE Transactions on Intelligent Transportation Systems covers the theoretical, experimental, and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation, and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.