{"title":"自主地面车辆视觉导航系统仿真研究","authors":"Feiyang Wu, Danping Zou","doi":"10.1145/3586185.3586192","DOIUrl":null,"url":null,"abstract":"Navigation for autonomous ground vehicles (AGV) should be accurate and quick. Traditional navigation systems, consisting of perception, planning, and control, are unable to use noisy visual images efficiently on a power-limited computation unit. These systems also require lots of parameter-tuning work when deployed on a new robot. By contrast, end-to-end approaches, that directly map sensor information and robot state to planned trajectories, have the potential to navigate autonomous ground vehicles on edge computation devices and possess far fewer manually-tuned parameters. However, collecting data on real robots and labeling the data for training is time-consuming and costly. Therefore, many approaches turn to automatic data labeling and collection in the simulation environment. Motivated by a learning-based navigation system for drones, we present a sim-to-real learning-based navigation pipeline for AGVs where the model is solely trained in simulation environments (Gazebo and UE4) and directly deployed to a real AGV. Results show that after training, the system achieves a high success rate in both simulation and real-world cases, indicating the great potential of this learning pipeline.","PeriodicalId":383630,"journal":{"name":"Proceedings of the 2023 4th International Conference on Artificial Intelligence in Electronics Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Visual Navigation System in Simulation for Autonomous Ground Vehicles in Real World\",\"authors\":\"Feiyang Wu, Danping Zou\",\"doi\":\"10.1145/3586185.3586192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Navigation for autonomous ground vehicles (AGV) should be accurate and quick. Traditional navigation systems, consisting of perception, planning, and control, are unable to use noisy visual images efficiently on a power-limited computation unit. These systems also require lots of parameter-tuning work when deployed on a new robot. By contrast, end-to-end approaches, that directly map sensor information and robot state to planned trajectories, have the potential to navigate autonomous ground vehicles on edge computation devices and possess far fewer manually-tuned parameters. However, collecting data on real robots and labeling the data for training is time-consuming and costly. Therefore, many approaches turn to automatic data labeling and collection in the simulation environment. Motivated by a learning-based navigation system for drones, we present a sim-to-real learning-based navigation pipeline for AGVs where the model is solely trained in simulation environments (Gazebo and UE4) and directly deployed to a real AGV. 
Results show that after training, the system achieves a high success rate in both simulation and real-world cases, indicating the great potential of this learning pipeline.\",\"PeriodicalId\":383630,\"journal\":{\"name\":\"Proceedings of the 2023 4th International Conference on Artificial Intelligence in Electronics Engineering\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 4th International Conference on Artificial Intelligence in Electronics Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3586185.3586192\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 4th International Conference on Artificial Intelligence in Electronics Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3586185.3586192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Visual Navigation System in Simulation for Autonomous Ground Vehicles in Real World
Navigation for autonomous ground vehicles (AGVs) should be accurate and fast. Traditional navigation systems, composed of perception, planning, and control modules, cannot process noisy visual images efficiently on power-limited computation units, and they require substantial parameter tuning when deployed on a new robot. By contrast, end-to-end approaches, which directly map sensor information and robot state to planned trajectories, have the potential to navigate AGVs on edge computing devices and involve far fewer manually tuned parameters. However, collecting data on real robots and labeling it for training is time-consuming and costly, so many approaches turn to automatic data collection and labeling in simulation environments. Motivated by a learning-based navigation system for drones, we present a sim-to-real, learning-based navigation pipeline for AGVs in which the model is trained solely in simulation environments (Gazebo and UE4) and deployed directly on a real AGV. Results show that, after training, the system achieves a high success rate in both simulated and real-world tests, indicating the great potential of this learning pipeline.
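To make the end-to-end idea concrete, below is a minimal sketch of a policy network that maps a camera image and the robot state to a short planned trajectory, trained by regression against trajectories auto-labeled in simulation. This is an illustrative assumption, not the authors' actual model: the architecture, layer sizes, and names (NavPolicy, TRAJ_LEN, STATE_DIM) are all hypothetical.

# Hypothetical sketch of an end-to-end visual navigation policy;
# the architecture and dimensions are illustrative, not the paper's model.
import torch
import torch.nn as nn

TRAJ_LEN = 10   # number of (x, y) waypoints to predict (assumed)
STATE_DIM = 6   # e.g. body-frame velocity + goal direction (assumed)

class NavPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN encoder for a 1x64x64 depth (or grayscale) image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Flatten(),  # -> (batch, 64 * 6 * 6)
        )
        # MLP head fuses image features with the robot state and
        # regresses TRAJ_LEN body-frame (x, y) waypoints.
        self.head = nn.Sequential(
            nn.Linear(64 * 6 * 6 + STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_LEN * 2),
        )

    def forward(self, image, state):
        feat = self.encoder(image)
        out = self.head(torch.cat([feat, state], dim=1))
        return out.view(-1, TRAJ_LEN, 2)  # (batch, waypoints, xy)

# Training-step skeleton: supervised regression against waypoints
# labeled automatically in simulation (e.g. by a planner with access
# to ground-truth maps). Tensors here are random placeholders.
policy = NavPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
image = torch.randn(8, 1, 64, 64)      # placeholder image batch
state = torch.randn(8, STATE_DIM)      # placeholder robot states
target = torch.randn(8, TRAJ_LEN, 2)   # placeholder expert waypoints
loss = nn.functional.mse_loss(policy(image, state), target)
optim.zero_grad()
loss.backward()
optim.step()

A network of roughly this size can run on an edge device, which is the practical appeal of the end-to-end formulation the abstract describes: one learned module replaces separately tuned perception and planning stages.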