Lei Gong, Shun-Ming Wang, Yu Zhang, Yanyong Zhang, Jianmin Ji
Title: Lightweight Map-Enhanced 3D Object Detection and Tracking for Autonomous Driving
DOI: 10.1145/3457913.3457941
Venue: Proceedings of the 12th Asia-Pacific Symposium on Internetware
Publication date: 2020-11-01
Citations: 0
Abstract
3D object detection and tracking are crucial to real-time, accurate perception of the surrounding environment in autonomous driving. Recent approaches to 3D object detection and tracking have made great progress, thanks to the rapid development of deep learning models. Even though these models achieve superior performance on specific datasets, actual self-driving systems still cannot handle real-world driving situations properly, especially in complicated scenarios such as road intersections. With the development of vehicle-infrastructure cooperation technology, scene information such as maps is considered to have great potential for alleviating these problems. In this paper, we explore the potential of solving corner cases in real driving scenarios through the cooperation between autonomous vehicles and map information. We propose a holistic approach that integrates and utilizes map information in a system following the tracking-by-detection paradigm. To ensure that using map information does not add much overhead to detection and tracking, we propose a representation method that extracts concise information from rich maps. We show that our framework can improve detection and tracking accuracy with little or no increase in latency. Specifically, in some cases, our results demonstrate a MOTA improvement of nearly 2%.
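The map-enhanced tracking-by-detection idea described above can be sketched as a two-stage loop: filter each frame's detections against a concise map prior (e.g. a drivable-area grid), then associate the survivors with existing tracks. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the grid-cell drivable-area check (`in_drivable_area`), the greedy nearest-neighbor matching, and the `max_dist` gate are all hypothetical choices made for clarity.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    center: tuple  # (x, y) position in map coordinates

def in_drivable_area(center, drivable_cells):
    # Map prior (assumption): the rich HD map is reduced to a set of
    # drivable grid cells; a detection is kept only if its cell is drivable.
    cell = (int(center[0]), int(center[1]))
    return cell in drivable_cells

def associate(tracks, detections, drivable_cells, max_dist=2.0):
    """Tracking-by-detection step: map-filter detections, then greedily
    match each track to its nearest unused detection within max_dist."""
    dets = [d for d in detections if in_drivable_area(d, drivable_cells)]
    assignments, used = {}, set()
    for t in tracks:
        best, best_d = None, max_dist
        for i, d in enumerate(dets):
            if i in used:
                continue
            dist = ((t.center[0] - d[0]) ** 2 + (t.center[1] - d[1]) ** 2) ** 0.5
            if dist < best_d:
                best, best_d = i, dist
        if best is not None:
            used.add(best)
            assignments[t.track_id] = dets[best]
    # Unmatched (but map-consistent) detections would spawn new tracks.
    unmatched = [d for i, d in enumerate(dets) if i not in used]
    return assignments, unmatched

tracks = [Track(1, (0.0, 0.0))]
detections = [(0.5, 0.5), (5.0, 5.0), (10.0, 10.0)]
drivable = {(0, 0), (10, 10)}  # (5, 5) is off-road, so that detection is dropped
assignments, unmatched = associate(tracks, detections, drivable)
```

Because the map filter runs before association, off-road false positives never enter the matching loop, which is one way such a prior can improve accuracy without adding meaningful latency.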