{"title":"Baidu driving dataset and end-to-end reactive control model","authors":"Hao Yu, Shu Yang, Weihao Gu, Shaoyu Zhang","doi":"10.1109/IVS.2017.7995742","DOIUrl":null,"url":null,"abstract":"End-to-end autonomous driving system has obtained great progress recently. In this paper, we will introduce our open source dataset: Baidu Driving Dataset(BDD), and our end-to-end reactive control model trained on BDD. The BDD comes from Baidu street view project, which generates millions of kilometers driving data every year. Among them, we publish 10000 kilometers driving data for end-to-end autonomous driving research. The BDD consists of two parts: forward images and vehicle motion attitude. The vehicle motion attitude is derived from real time kinematic GPS location data with standard deviation of 3 centimeters. Our reactive control model consists of lateral control and longitudinal control. We employ curvature instead of steering angle for lateral control, and leverage acceleration, not throttle or brake, for longitudinal control. CNN network is employed for lateral control model, mapping a single image from forward camera directly to corresponding curvature. For longitudinal control, stacked convolutional LSTM is used to extract spatial and temporal features from a sequence of frames, and to map the features with longitudinal control commands. The demo and data are in http://roadhackers.baidu.com. To the best of our knowledge, it is the first time that both lateral and longitudinal control are implemented in an end-to-end style.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2017.7995742","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 31
Abstract
End-to-end autonomous driving systems have made great progress recently. In this paper, we introduce our open-source dataset, the Baidu Driving Dataset (BDD), and our end-to-end reactive control model trained on it. The BDD comes from the Baidu street view project, which generates millions of kilometers of driving data every year. From this, we publish 10,000 kilometers of driving data for end-to-end autonomous driving research. The BDD consists of two parts: forward-facing images and vehicle motion attitude. The vehicle motion attitude is derived from real-time kinematic (RTK) GPS location data with a standard deviation of 3 centimeters. Our reactive control model comprises lateral control and longitudinal control. We use curvature instead of steering angle for lateral control, and acceleration, rather than throttle or brake, for longitudinal control. A CNN is employed for the lateral control model, mapping a single image from the forward camera directly to the corresponding curvature. For longitudinal control, a stacked convolutional LSTM extracts spatial and temporal features from a sequence of frames and maps them to longitudinal control commands. The demo and data are available at http://roadhackers.baidu.com. To the best of our knowledge, this is the first time that both lateral and longitudinal control have been implemented in an end-to-end style.
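To make the two control heads concrete, the sketch below shows one plausible PyTorch realization of what the abstract describes: a CNN that regresses curvature from a single forward image, and two stacked convolutional LSTM cells that map a frame sequence to an acceleration command. All class names (LateralCNN, ConvLSTMCell, LongitudinalNet), layer sizes, and input resolutions are illustrative assumptions; the paper does not specify its architecture at this level of detail.

```python
# Minimal sketch of the two end-to-end control heads described in the
# abstract. Layer sizes, class names, and input resolution are assumptions
# made for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn

class LateralCNN(nn.Module):
    """Maps a single forward-camera image to a scalar road curvature."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # regress curvature, not steering angle

    def forward(self, img):           # img: (B, 3, H, W)
        return self.head(self.features(img).flatten(1))  # (B, 1) curvature

class ConvLSTMCell(nn.Module):
    """One convolutional LSTM cell: all four gates are computed by a single
    convolution, so the hidden state keeps its spatial layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):      # x: (B, in_ch, H, W)
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c

class LongitudinalNet(nn.Module):
    """Maps a short frame sequence to a scalar acceleration command via two
    stacked ConvLSTM cells."""
    def __init__(self, hid=32):
        super().__init__()
        self.hid = hid
        self.stem = nn.Conv2d(3, 16, 5, stride=4)  # downsample each frame
        self.cell1 = ConvLSTMCell(16, hid)
        self.cell2 = ConvLSTMCell(hid, hid)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(hid, 1))

    def forward(self, frames):        # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        h1 = c1 = h2 = c2 = None
        for t in range(T):
            x = torch.relu(self.stem(frames[:, t]))
            if h1 is None:            # lazy zero-init once spatial size is known
                zeros = lambda: x.new_zeros(B, self.hid, *x.shape[-2:])
                h1, c1, h2, c2 = zeros(), zeros(), zeros(), zeros()
            h1, c1 = self.cell1(x, (h1, c1))
            h2, c2 = self.cell2(h1, (h2, c2))
        return self.head(h2)          # (B, 1) acceleration command

if __name__ == "__main__":
    curvature = LateralCNN()(torch.randn(1, 3, 160, 320))
    acceleration = LongitudinalNet()(torch.randn(1, 8, 3, 160, 320))
    print(curvature.shape, acceleration.shape)  # torch.Size([1, 1]) twice
```

The split mirrors the abstract's design choice: the lateral head needs only a single frame because curvature is a geometric property of the current view, while the longitudinal head consumes a sequence because acceleration depends on how the scene evolves over time.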