{"title":"探索自动驾驶汽车行为克隆的反射限制","authors":"Mohammad Nazeri, M. Bohlouli","doi":"10.1109/ICDM51629.2021.00153","DOIUrl":null,"url":null,"abstract":"To become a standard part of our daily lives, autonomous vehicles must ensure human safety. This safety comes from knowing what will happen in the future. The most common approach in state-of-the-art methods for sensorimotor driving is behavior cloning. These models struggle to anticipate what will happen in the near future to better plan their actions. Humans do so by first observing what objects are present in the environment, and by studying their type and history, they can predict how they may evolve in the near future. Based on this observation, we first demonstrate the limitation of behavior cloning in making safe and reliable decisions. Then, we propose a hierarchical approach to teach an agent how to make safer decisions based on the plausible future. The key idea is instead of hand-picking future features we integrate a high-dimensional prediction module such as predicting future RGB/semantically segmented frames into our model to allow the model to learn the required features by itself. In the end, we demonstrate qualitatively and quantitatively that this approach yields safer decisions by the agent.","PeriodicalId":320970,"journal":{"name":"2021 IEEE International Conference on Data Mining (ICDM)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Exploring Reflective Limitation of Behavior Cloning in Autonomous Vehicles\",\"authors\":\"Mohammad Nazeri, M. Bohlouli\",\"doi\":\"10.1109/ICDM51629.2021.00153\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To become a standard part of our daily lives, autonomous vehicles must ensure human safety. This safety comes from knowing what will happen in the future. 
The most common approach in state-of-the-art methods for sensorimotor driving is behavior cloning. These models struggle to anticipate what will happen in the near future to better plan their actions. Humans do so by first observing what objects are present in the environment, and by studying their type and history, they can predict how they may evolve in the near future. Based on this observation, we first demonstrate the limitation of behavior cloning in making safe and reliable decisions. Then, we propose a hierarchical approach to teach an agent how to make safer decisions based on the plausible future. The key idea is instead of hand-picking future features we integrate a high-dimensional prediction module such as predicting future RGB/semantically segmented frames into our model to allow the model to learn the required features by itself. In the end, we demonstrate qualitatively and quantitatively that this approach yields safer decisions by the agent.\",\"PeriodicalId\":320970,\"journal\":{\"name\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM51629.2021.00153\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Data Mining 
(ICDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM51629.2021.00153","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploring Reflective Limitation of Behavior Cloning in Autonomous Vehicles
To become a standard part of our daily lives, autonomous vehicles must ensure human safety. This safety comes from anticipating what will happen in the future. The most common state-of-the-art approach to sensorimotor driving is behavior cloning, yet such models struggle to anticipate the near future and thus to plan their actions accordingly. Humans manage this by first observing which objects are present in the environment and then, from their type and history, predicting how those objects may evolve in the near future. Based on this observation, we first demonstrate the limitation of behavior cloning in making safe and reliable decisions. We then propose a hierarchical approach that teaches an agent to make safer decisions based on the plausible future. The key idea is that, instead of hand-picking future features, we integrate a high-dimensional prediction module (e.g., one that predicts future RGB or semantically segmented frames) into our model, allowing it to learn the required features on its own. Finally, we demonstrate both qualitatively and quantitatively that this approach yields safer decisions by the agent.
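For readers unfamiliar with the baseline the abstract critiques: behavior cloning is simply supervised learning on expert (state, action) pairs, with no explicit model of the future. The following is a minimal illustrative sketch, not the paper's method; the authors' models operate on camera frames, whereas this toy uses a 1-D lane-offset state and a linear steering policy, both of which are invented here for illustration.

```python
# Toy behavior-cloning sketch (illustrative assumption, not the paper's model).
# A linear policy a = w*s + b is fit to expert (state, action) pairs by
# stochastic gradient descent on squared imitation error -- the essence of
# behavior cloning: mimic the expert's action for each observed state.

def fit_behavior_cloning(demos, lr=0.05, epochs=200):
    """demos: list of (state, expert_action) float pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for s, a in demos:
            pred = w * s + b
            err = pred - a          # deviation from the expert's action
            w -= lr * err * s       # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err           # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Hypothetical expert: steer proportionally to lane offset (a = 2*s).
demos = [(s / 10, 2 * s / 10) for s in range(-5, 6)]
w, b = fit_behavior_cloning(demos)
print(round(w, 2), round(b, 2))  # recovers roughly w = 2.0, b = 0.0
```

Note what the sketch makes concrete: the cloned policy only reproduces the expert's reaction to the current state; nothing in the loss encourages it to predict what the scene will look like next, which is exactly the limitation the paper's hierarchical prediction module is designed to address.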