Fei-xiang Xu , Yan-chen Wang , De-qiang Cheng , Wei-guang An , Chen Zhou , Qi-qi Kou
{"title":"非结构化环境下自动专用车辆强化学习驱动的启发式路径规划方法","authors":"Fei-xiang Xu , Yan-chen Wang , De-qiang Cheng , Wei-guang An , Chen Zhou , Qi-qi Kou","doi":"10.1016/j.robot.2025.105231","DOIUrl":null,"url":null,"abstract":"<div><div>Aiming at improving the adaptability of global path planning method for the Automated Special Vehicles (ASVs) in a variety of unstructured environments, a reinforcement learning (RL)-driven heuristic path planning method is proposed. The introduction of traditional heuristic algorithm avoids inefficiency of RL in the early learning phase, and it provides a preliminary planning path to be adjusted by RL. Furthermore, a reward function is designed based on vehicle dynamics to generate a smooth, stable, and efficient path. The simulation environments are established based on real terrain data. The algorithm's performance is evaluated by testing various starting and ending points across different terrains. This paper also examines how obstacle distributions and ground conditions affect ASV path planning. Results demonstrate that the proposed method generates collision-free, efficient paths while maintaining excellent adaptability to diverse complex terrains.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"195 ","pages":"Article 105231"},"PeriodicalIF":5.2000,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning-driven heuristic path planning method for automated special vehicles in unstructured environment\",\"authors\":\"Fei-xiang Xu , Yan-chen Wang , De-qiang Cheng , Wei-guang An , Chen Zhou , Qi-qi Kou\",\"doi\":\"10.1016/j.robot.2025.105231\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Aiming at improving the adaptability of global path planning method for the Automated Special Vehicles (ASVs) in a variety of unstructured environments, a reinforcement learning (RL)-driven heuristic path planning method is proposed. The introduction of traditional heuristic algorithm avoids inefficiency of RL in the early learning phase, and it provides a preliminary planning path to be adjusted by RL. Furthermore, a reward function is designed based on vehicle dynamics to generate a smooth, stable, and efficient path. The simulation environments are established based on real terrain data. The algorithm's performance is evaluated by testing various starting and ending points across different terrains. This paper also examines how obstacle distributions and ground conditions affect ASV path planning. 
Results demonstrate that the proposed method generates collision-free, efficient paths while maintaining excellent adaptability to diverse complex terrains.</div></div>\",\"PeriodicalId\":49592,\"journal\":{\"name\":\"Robotics and Autonomous Systems\",\"volume\":\"195 \",\"pages\":\"Article 105231\"},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2025-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Robotics and Autonomous Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0921889025003288\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0921889025003288","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Reinforcement learning-driven heuristic path planning method for automated special vehicles in unstructured environment
To improve the adaptability of global path planning for Automated Special Vehicles (ASVs) across a variety of unstructured environments, a reinforcement learning (RL)-driven heuristic path planning method is proposed. Introducing a traditional heuristic algorithm avoids the inefficiency of RL in the early learning phase and provides a preliminary path that RL subsequently adjusts. Furthermore, a reward function based on vehicle dynamics is designed to generate smooth, stable, and efficient paths. Simulation environments are built from real terrain data, and the algorithm's performance is evaluated with various start and end points across different terrains. The paper also examines how obstacle distributions and ground conditions affect ASV path planning. Results demonstrate that the proposed method generates collision-free, efficient paths while maintaining excellent adaptability to diverse complex terrains.
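The abstract only outlines the approach, so the following toy sketch is an assumption-laden illustration of the general idea rather than the authors' implementation: A* supplies a preliminary path on a small grid, and tabular Q-learning then refines it with a reward that penalizes steps, steep elevation changes (a crude stand-in for the paper's vehicle-dynamics terms), and deviation from the preliminary path. The grid size, obstacle layout, terrain model, reward weights, and hyperparameters are all illustrative choices.

# Toy sketch only (assumptions, not the paper's implementation): A* gives a
# preliminary path, then tabular Q-learning refines it.
import heapq
import random
import numpy as np

H, W = 20, 20
rng = np.random.default_rng(0)
height = rng.random((H, W)) * 2.0                      # illustrative terrain elevation
obstacles = {(5, j) for j in range(3, 15)} | {(12, j) for j in range(6, 18)}
START, GOAL = (0, 0), (19, 19)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(cell):
    for dr, dc in MOVES:
        r, c = cell[0] + dr, cell[1] + dc
        if 0 <= r < H and 0 <= c < W and (r, c) not in obstacles:
            yield (r, c)

def astar(start, goal):
    # Traditional heuristic search that supplies the preliminary path.
    h = lambda c: max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))
    frontier, came, g = [(h(start), start)], {}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for nxt in neighbors(cur):
            ng = g[cur] + 1.0
            if ng < g.get(nxt, float("inf")):
                g[nxt], came[nxt] = ng, cur
                heapq.heappush(frontier, (ng + h(nxt), nxt))
    return []

prelim = astar(START, GOAL)
prelim_set = set(prelim)

def reward(cur, nxt):
    # Illustrative reward: step cost, slope penalty, shaping toward the A* path.
    r = -1.0
    r -= 5.0 * abs(float(height[nxt]) - float(height[cur]))
    if nxt not in prelim_set:
        r -= 0.5
    if nxt == GOAL:
        r += 100.0
    return r

# Tabular Q-learning that adjusts the preliminary plan (hyperparameters are arbitrary).
Q, alpha, gamma, eps = {}, 0.3, 0.95, 0.2
for episode in range(3000):
    s = START
    for _ in range(400):
        acts = list(neighbors(s))
        if not acts:
            break
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda n: Q.get((s, n), 0.0))
        r = reward(s, a)
        best_next = max((Q.get((a, n), 0.0) for n in neighbors(a)), default=0.0)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
        s = a
        if s == GOAL:
            break

# Greedy rollout of the learned policy (may need more episodes to reach the goal).
path, s, visited = [START], START, {START}
while s != GOAL and len(path) < 400:
    acts = [n for n in neighbors(s) if n not in visited]
    if not acts:
        break
    s = max(acts, key=lambda n: Q.get((s, n), 0.0))
    path.append(s)
    visited.add(s)
print("preliminary A* length:", len(prelim), "| refined path length:", len(path))

The paper's method presumably uses a far richer dynamics-based reward (smoothness, stability, efficiency) and real-terrain simulation environments; the slope and shaping terms above are only placeholders for those components.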
Journal introduction:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.