A neural network method for path planning in a two-dimensional space

Dmitry S. Lukin, Evgeny Yu. Kosenko
DOI: 10.17212/2782-2001-2023-4-55-68
Journal: Analysis and data processing systems
Published: 2023-12-24 (Journal Article)
Citations: 0

Abstract

Currently, the robotization of many spheres of human life is proceeding at a rapid pace. Robots of various types and purposes are used everywhere, from warehouse robots that move along a given route or follow markers to high-tech robotic complexes that solve tasks with minimal operator participation. Robotics technology continues to evolve, and its potential for automation and solving various tasks is constantly expanding. One of the key issues in increasing the autonomy of mobile robots is the development of new, and the improvement of existing, approaches to controlling robot motion, in particular path planning. In this paper, the path-planning task is solved using artificial neural networks and deep reinforcement learning, in which the robot learns to choose actions in the environment so as to maximize a numerical reward or achieve a certain goal. This approach makes it possible to plan a motion trajectory by modeling the environment, the behavior of the robot, and the interaction between them. Reinforcement learning provides an effective way for robots and autonomous systems to adapt to diverse conditions and perform path-planning tasks. The paper investigates the possibility of solving the problem of planning motion to a given point using the proximal policy optimization (PPO) method and the Actor–Critic method. The results obtained show that the task can be solved after training on a relatively small number of episodes. The proposed approach can be used to control ground-based robotic systems for various purposes.
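As a concrete illustration of the reward-maximization scheme the abstract describes, the sketch below trains a minimal tabular Actor-Critic agent to reach a goal cell on a small 2D grid. This is a hypothetical example, not the authors' implementation: the grid size, reward values (-1 per step, +10 at the goal), and all hyperparameters are assumptions, and the paper's neural-network policy and PPO variant are replaced by a simple softmax-over-logits table so the example stays self-contained.

```python
import numpy as np

# Assumed toy environment: 5x5 grid, agent starts at (0, 0), goal at (4, 4).
# Reward is -1 per step and +10 on reaching the goal, so shorter paths score higher.
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    done = nxt == GOAL
    return nxt, (10.0 if done else -1.0), done

def train(episodes=500, alpha=0.1, beta=0.1, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    logits = np.zeros((SIZE, SIZE, 4))  # actor: per-state action preferences
    values = np.zeros((SIZE, SIZE))     # critic: state-value estimates
    for _ in range(episodes):
        s, done, t = (0, 0), False, 0
        while not done and t < 100:
            # Softmax policy over the actor's logits for the current state
            p = np.exp(logits[s] - logits[s].max())
            p /= p.sum()
            a = rng.choice(4, p=p)
            nxt, r, done = step(s, a)
            # TD error: the critic's one-step "surprise" at the observed reward
            td = r + (0.0 if done else gamma * values[nxt]) - values[s]
            values[s] += alpha * td
            # Policy-gradient update: move the chosen action's logit along the TD error
            grad = -p
            grad[a] += 1.0
            logits[s] += beta * td * grad
            s, t = nxt, t + 1
    return logits, values

def greedy_path(logits, max_steps=50):
    """Follow the learned policy greedily from the start to read off a path."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        if s == GOAL:
            break
        s, _, _ = step(s, int(np.argmax(logits[s])))
        path.append(s)
    return path
```

After training, `greedy_path(logits)` traces the planned route from the start cell toward the goal; the critic's value table can also be inspected to see the learned gradient of "distance to goal". The paper's PPO method differs mainly in how the actor update is constrained (a clipped surrogate objective instead of the raw policy-gradient step used here).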