A path planning algorithm fusion of obstacle avoidance and memory functions

Qingchun Zheng, Shubo Li, Peihao Zhu, Wenpeng Ma, Yanlu Wang

Cognitive Computation and Systems · Published 2023-12-08 · DOI: 10.1049/ccs2.12098 · PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12098
In this study, to address the issues of slow convergence and poor learning efficiency in the early stages of training, the authors improve and optimise the Deep Deterministic Policy Gradient (DDPG) algorithm. First, inspired by the Artificial Potential Field (APF) method, the action selection strategy of DDPG is improved to accelerate convergence during the early stages of training and reduce the time the mobile robot takes to reach the target point. Then, the neural network structure of the DDPG algorithm is optimised with a Long Short-Term Memory (LSTM) network, which accelerates the algorithm's convergence in complex dynamic scenes. Simulation experiments with mobile robots in both static and dynamic scenes are carried out in ROS. The results demonstrate that the Artificial Potential Field-Long Short-Term Memory Deep Deterministic Policy Gradient (APF-LSTM DDPG) algorithm converges significantly faster in complex dynamic scenes, and its success rate is improved by 7.3% and 3.6% compared with the DDPG and LSTM-DDPG algorithms, respectively. Finally, the effectiveness of the proposed method is also demonstrated in real-world experiments on a physical mobile robot platform, laying a foundation for the path planning of mobile robots in complex, changing environments.
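The abstract describes two modifications to DDPG: APF-guided action selection early in training, and an LSTM-based state encoder. Below is a minimal PyTorch sketch of how these two ideas could fit together; it is not the authors' implementation, and the function names (`apf_action`, `LSTMActor`, `select_action`), network sizes, gains, and the linear blending schedule are all illustrative assumptions.

```python
# Sketch of (1) APF-biased action selection for DDPG and (2) an LSTM actor.
# Gains, layer sizes, and the warm-up schedule are assumptions, not the paper's values.
import torch
import torch.nn as nn


def apf_action(robot_pos, goal_pos, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Classic APF: attraction toward the goal plus repulsion from obstacles
    closer than d0; the summed force is normalised into a unit 2-D command."""
    force = k_att * (goal_pos - robot_pos)                    # attractive term
    for obs in obstacles:
        diff = robot_pos - obs
        d = torch.norm(diff)
        if d < d0:                                            # repulsion only inside d0
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force / (torch.norm(force) + 1e-8)


class LSTMActor(nn.Module):
    """DDPG actor whose state encoder is an LSTM over recent observations,
    giving the policy a short memory of how the scene is changing."""
    def __init__(self, obs_dim, act_dim=2, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs_seq):                               # (batch, time, obs_dim)
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1])                          # act on the last step


def select_action(actor, obs_seq, robot_pos, goal_pos, obstacles, step,
                  warmup_steps=10_000):
    """Blend the APF command with the learned policy: early in training the
    APF term dominates and pulls the robot toward the goal; its weight then
    decays linearly to zero so the learned policy takes over."""
    w = max(0.0, 1.0 - step / warmup_steps)                   # assumed decay schedule
    with torch.no_grad():
        policy_a = actor(obs_seq.unsqueeze(0)).squeeze(0)     # (act_dim,)
    return w * apf_action(robot_pos, goal_pos, obstacles) + (1.0 - w) * policy_a
```

In this reading, the APF term supplies a dense, goal-directed prior that substitutes for random exploration in the first episodes (addressing the slow early convergence the abstract mentions), while the LSTM encoder lets the actor condition on a window of past observations rather than a single frame, which is what helps in dynamic scenes with moving obstacles.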