Human-like Highway Trajectory Modeling based on Inverse Reinforcement Learning
Ruoyu Sun, Shaochi Hu, Huijing Zhao, M. Moze, F. Aioun, F. Guillemard
2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 1482–1489, October 2019
DOI: 10.1109/ITSC.2019.8916970
Citations: 4
Abstract
Autonomous driving is one of today's cutting-edge technologies. The driving actions and trajectories of autonomous cars should not only achieve autonomy and safety, but also conform to human drivers' behavior patterns when sharing the road with human drivers on the highway. Traditional methods, though robust and interpretable, demand much human labor to engineer the complex mapping from the current driving situation to the vehicle's future control. Newly developed deep-learning methods can learn such complex mappings automatically from data and require less human engineering, but they mostly act as black boxes and are less interpretable. We propose a new combined method based on inverse reinforcement learning that harnesses the advantages of both. Experimental validation on lane-change prediction and human-like trajectory planning shows that the proposed method approaches state-of-the-art performance in modeling human trajectories while being both interpretable and data-driven.
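To illustrate the general idea behind inverse-reinforcement-learning approaches of this kind (this is a generic maximum-entropy IRL sketch, not the paper's specific algorithm), one can assume the reward of a candidate trajectory is linear in hand-crafted features, r(τ) = w·φ(τ), and fit the weights w so that the human-demonstrated trajectory becomes the most probable among sampled candidates under P(τ) ∝ exp(w·φ(τ)). The feature names below are hypothetical examples of interpretable trajectory descriptors:

```python
import numpy as np

def maxent_irl(features, demo_idx, lr=0.1, iters=500):
    """Learn linear reward weights w by gradient ascent on the
    demonstrated trajectory's log-likelihood under the maximum-entropy
    trajectory distribution P(tau) ∝ exp(w . phi(tau)).

    features : (n_candidates, n_features) array, phi(tau) per candidate
    demo_idx : index of the human-demonstrated trajectory
    """
    n, d = features.shape
    w = np.zeros(d)
    for _ in range(iters):
        logits = features @ w
        p = np.exp(logits - logits.max())  # numerically stable softmax
        p /= p.sum()
        # d/dw log P(demo) = phi(demo) - E_p[phi]
        grad = features[demo_idx] - p @ features
        w += lr * grad
    return w

# Hypothetical interpretable features per candidate trajectory:
# [comfort, progress, lane-keeping]
phi = np.array([[0.9, 0.2, 0.8],   # smooth, slow, stays in lane (demonstrated)
                [0.1, 0.9, 0.3],   # aggressive, fast
                [0.5, 0.5, 0.5]])  # middle ground
w = maxent_irl(phi, demo_idx=0)
scores = phi @ w
probs = np.exp(scores - scores.max())
probs /= probs.sum()
```

After training, the learned weights w are directly inspectable (which feature the human driver appears to value), which is the interpretability advantage the abstract refers to, while the weights themselves are learned from data.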