{"title":"基于交叉模态融合的非公路环境道路检测先验校正","authors":"Yuru Wang, Yi Sun, Jun Yu Li, Meiping Shi","doi":"10.1109/IROS47612.2022.9981350","DOIUrl":null,"url":null,"abstract":"Road detection plays a fundamental role in the visual navigation system of autonomous vehicles. However, it's still challenging to achieve robust road detection in off-road scenarios due to their complicated road appearances and ambiguous road structures. Therefore, existing image-based road detection approaches usually fail to extract the right routes due to the lack of the effective fusion of the image and prior reference paths(road guidances generated via map annotations and GPS localization). Besides, the reference paths are not always reliable because of GPS localization errors and mapping errors. To achieve robust road detection in off-road scenarios, we propose a prior-correction-based road detection network named PR-ROAD via fusing the cross-model information provided by both the reference path and the input image. These two heterogeneous data, prior and image, are deeply fused by a cross-attention module and formulate contextual inter-dependencies. We conduct experiments in our collected rural, off-road and urban datasets. The experimental results demonstrate the effectiveness of the proposed method both on unstructured and structured roads.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-modal Fusion-based Prior Correction for Road Detection in Off-road Environments\",\"authors\":\"Yuru Wang, Yi Sun, Jun Yu Li, Meiping Shi\",\"doi\":\"10.1109/IROS47612.2022.9981350\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Road detection plays a fundamental role in the visual navigation system of autonomous vehicles. However, it's still challenging to achieve robust road detection in off-road scenarios due to their complicated road appearances and ambiguous road structures. Therefore, existing image-based road detection approaches usually fail to extract the right routes due to the lack of the effective fusion of the image and prior reference paths(road guidances generated via map annotations and GPS localization). Besides, the reference paths are not always reliable because of GPS localization errors and mapping errors. To achieve robust road detection in off-road scenarios, we propose a prior-correction-based road detection network named PR-ROAD via fusing the cross-model information provided by both the reference path and the input image. These two heterogeneous data, prior and image, are deeply fused by a cross-attention module and formulate contextual inter-dependencies. We conduct experiments in our collected rural, off-road and urban datasets. 
The experimental results demonstrate the effectiveness of the proposed method both on unstructured and structured roads.\",\"PeriodicalId\":431373,\"journal\":{\"name\":\"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS47612.2022.9981350\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS47612.2022.9981350","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cross-modal Fusion-based Prior Correction for Road Detection in Off-road Environments
Road detection plays a fundamental role in the visual navigation system of autonomous vehicles. However, it is still challenging to achieve robust road detection in off-road scenarios because of their complicated road appearances and ambiguous road structures. Existing image-based road detection approaches therefore often fail to extract the correct routes, owing to the lack of effective fusion between the image and prior reference paths (road guidance generated from map annotations and GPS localization). Moreover, the reference paths themselves are not always reliable because of GPS localization and mapping errors. To achieve robust road detection in off-road scenarios, we propose a prior-correction-based road detection network, named PR-ROAD, that fuses the cross-modal information provided by the reference path and the input image. These two heterogeneous inputs, the prior and the image, are deeply fused by a cross-attention module that formulates their contextual inter-dependencies. We conduct experiments on our collected rural, off-road, and urban datasets. The experimental results demonstrate the effectiveness of the proposed method on both unstructured and structured roads.
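
The paper itself provides no code here; as a rough sketch of the kind of cross-attention fusion described in the abstract, the PyTorch module below fuses image features (as queries) with prior reference-path features (as keys and values). All class names, tensor shapes, channel counts, and head counts are illustrative assumptions, not the authors' PR-ROAD implementation.

# Hypothetical sketch of cross-attention fusion between image features and
# prior reference-path features, in the spirit of the abstract above.
# Shapes and module names are illustrative assumptions, not PR-ROAD itself.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse image features (queries) with prior-path features (keys/values)."""

    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, img_feat: torch.Tensor, prior_feat: torch.Tensor) -> torch.Tensor:
        # img_feat, prior_feat: (B, C, H, W) feature maps from separate encoders.
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)     # (B, H*W, C) image queries
        kv = prior_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) prior keys/values
        fused, _ = self.attn(q, kv, kv)             # contextual inter-dependencies
        fused = self.norm(fused + q)                # residual connection + norm
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy usage: 64-channel feature maps at an assumed reduced resolution.
    img_feat = torch.randn(2, 64, 32, 32)
    prior_feat = torch.randn(2, 64, 32, 32)
    out = CrossAttentionFusion()(img_feat, prior_feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])

In practice the prior reference path would presumably first be projected or rasterized into the image frame and passed through its own encoder branch before a fusion step of this kind.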