LIV-DeepSORT: Optimized DeepSORT for Multiple Object Tracking in Autonomous Vehicles Using Camera and LiDAR Data Fusion
Z. Rakotoniaina, N. E. Chelbi, D. Gingras, Frédéric Faulconnier
2023 IEEE Intelligent Vehicles Symposium (IV), June 4, 2023. DOI: 10.1109/IV55152.2023.10186759
Object detection and tracking play a crucial role in the perception systems of autonomous vehicles. Simple Online and Realtime Tracking (SORT) techniques, such as DeepSORT, have proven to be among the most effective methods for multiple object tracking (MOT) in computer vision due to their ability to balance high performance with robustness in challenging scenarios. This article presents a method for adapting and optimizing the DeepSORT tracking algorithm to meet the demands of autonomous driving applications. Our approach leverages the Mask-Mean algorithm [2] to fuse data from cameras and LiDARs, and to detect, segment, and extract the 3D positions of objects in real-world space. In object tracking, we account for the ego-vehicle's motion when estimating each object's state, and an Unscented Kalman Filter (UKF) is used to handle the nonlinearity of each object's motion in real-world space. Our optimized version of DeepSORT, called LIV-DeepSORT, tracks multiple objects with high robustness and accuracy, even in dynamic environments, making it suitable for the perception systems of autonomous vehicles.
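The abstract does not include implementation details, but the UKF-based tracking step it describes can be sketched as follows. This is a minimal illustration only: the constant turn rate and velocity (CTRV) motion model, the planar position measurement assumed to come from the camera/LiDAR fusion stage, the simple ego-pose transform used for motion compensation, and the use of the `filterpy` library are all assumptions not stated in the paper.

```python
# Illustrative sketch, not the authors' implementation. Assumed pieces: CTRV state
# [px, py, v, yaw, yaw_rate], a 2D position measurement from the fusion stage,
# a planar ego-pose transform, and the filterpy UKF implementation.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.1  # sensor period in seconds (assumed)

def fx(x, dt):
    """Nonlinear CTRV process model: propagate [px, py, v, yaw, yaw_rate] by dt."""
    px, py, v, yaw, yaw_rate = x
    if abs(yaw_rate) > 1e-4:
        px += v / yaw_rate * (np.sin(yaw + yaw_rate * dt) - np.sin(yaw))
        py += v / yaw_rate * (-np.cos(yaw + yaw_rate * dt) + np.cos(yaw))
    else:  # straight-line limit when the turn rate is ~0
        px += v * np.cos(yaw) * dt
        py += v * np.sin(yaw) * dt
    yaw += yaw_rate * dt
    return np.array([px, py, v, yaw, yaw_rate])

def hx(x):
    """Measurement model: the fused detection observes the object's planar position."""
    return x[:2]

def ego_to_world(p_ego, ego_pose):
    """Ego-motion compensation: map a detection from the ego frame to the static
    world frame using the ego pose (x, y, heading)."""
    ex, ey, eyaw = ego_pose
    c, s = np.cos(eyaw), np.sin(eyaw)
    return np.array([ex + c * p_ego[0] - s * p_ego[1],
                     ey + s * p_ego[0] + c * p_ego[1]])

points = MerweScaledSigmaPoints(n=5, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=5, dim_z=2, dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.array([10.0, 2.0, 5.0, 0.0, 0.0])  # example initial track state
ukf.R = np.diag([0.2, 0.2]) ** 2              # measurement noise (assumed)
ukf.Q = np.eye(5) * 0.05                      # process noise (assumed)

# One tracking step for a single track: predict, then update with an
# ego-motion-compensated fused camera/LiDAR measurement.
detection_ego = np.array([9.5, 1.8])  # object position in the ego frame
ego_pose = np.array([1.2, 0.0, 0.02])  # ego pose in the world frame
ukf.predict()
ukf.update(ego_to_world(detection_ego, ego_pose))
print(ukf.x)  # updated world-frame state of the tracked object
```

In a full DeepSORT-style tracker, this predict/update cycle would run per track, with detections assigned to tracks by an association step (appearance plus motion gating) before each update; that association logic is omitted here.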