{"title":"Multi-State End-to-End Learning for Autonomous Vehicle Lateral Control","authors":"S. Mentasti, M. Bersani, M. Matteucci, F. Cheli","doi":"10.23919/AEITAUTOMOTIVE50086.2020.9307428","DOIUrl":null,"url":null,"abstract":"Lateral control is one of the primary requirements of an autonomous vehicle. This task is generally performed using complex pipelines, which include line detection trough neural network processing, vehicle state estimation, and planning. What we propose in this paper is an alternative end-to-end approach to the problem. Images acquired by a camera mounted on the vehicle are processed by two convolutional neural networks to directly retrieve the steering command. In particular, we propose an architecture built using two connected neural networks, one to predict the scenario the vehicle is facing and one, conditioned on possible situations, to predict the steering command. In our work, we also analyze the potential of a computer-generated dataset for a demanding task like end-to-end learning, where the image quality is fundamental. All the training is then performed on synthetic images, while the testing is done on real data acquired by an experimental vehicle.","PeriodicalId":104806,"journal":{"name":"2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/AEITAUTOMOTIVE50086.2020.9307428","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Lateral control is one of the primary requirements of an autonomous vehicle. This task is generally performed using complex pipelines, which include line detection through neural network processing, vehicle state estimation, and planning. In this paper we propose an alternative end-to-end approach to the problem. Images acquired by a camera mounted on the vehicle are processed by two convolutional neural networks to directly retrieve the steering command. In particular, we propose an architecture built from two connected neural networks: one predicts the scenario the vehicle is facing, and the other, conditioned on that prediction, outputs the steering command. We also analyze the potential of a computer-generated dataset for a demanding task like end-to-end learning, where image quality is fundamental. All training is therefore performed on synthetic images, while testing is done on real data acquired by an experimental vehicle.
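To make the two-network idea concrete, here is a minimal sketch of such a conditional architecture, assuming a PyTorch implementation. The layer sizes, the number of scenario classes, and all class and variable names (ScenarioNet, SteeringNet, etc.) are illustrative assumptions, not the paper's actual design; the point is only that the steering regressor receives the scenario prediction as an additional input.

```python
# Illustrative sketch only: a scenario classifier feeding a steering regressor.
# Layer sizes, scenario count, and names are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class ScenarioNet(nn.Module):
    """Classifies the driving scenario (e.g. straight, left curve, right curve) from a camera frame."""
    def __init__(self, num_scenarios: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(48, num_scenarios)

    def forward(self, img):
        # Returns scenario logits for the input frame.
        return self.classifier(self.features(img).flatten(1))


class SteeringNet(nn.Module):
    """Regresses the steering command, conditioned on the predicted scenario."""
    def __init__(self, num_scenarios: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Image features are concatenated with the scenario probabilities,
        # so the regression head is conditioned on the predicted scenario.
        self.regressor = nn.Sequential(
            nn.Linear(48 + num_scenarios, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single steering command
        )

    def forward(self, img, scenario_probs):
        feats = self.features(img).flatten(1)
        return self.regressor(torch.cat([feats, scenario_probs], dim=1))


# End-to-end inference: the first network predicts the scenario,
# the second predicts the steering command conditioned on that prediction.
scenario_net, steering_net = ScenarioNet(), SteeringNet()
frame = torch.randn(1, 3, 120, 320)            # dummy camera frame (batch, C, H, W)
scenario_probs = scenario_net(frame).softmax(dim=1)
steering = steering_net(frame, scenario_probs)
```

In this sketch the scenario prediction is passed as a soft probability vector rather than a hard label, which keeps the pipeline differentiable; whether the paper uses hard or soft conditioning is not stated in the abstract.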