Path Planning of Mobile Robot Using Reinforcement Learning

K. G. Krishnan, Abhishek Mohan, S. Vishnu, Steve Abraham Eapen, Amith Raj, J. Jacob
{"title":"Path Planning of Mobile Robot Using Reinforcement Learning","authors":"K. G. Krishnan, Abhishek Mohan, S. Vishnu, Steve Abraham Eapen, Amith Raj, J. Jacob","doi":"10.36548/jtcsst.2022.3.004","DOIUrl":null,"url":null,"abstract":"In complex planning and control operations and tasks like manipulating objects, assisting experts in various fields, navigating outdoor environments, and exploring uncharted territory, modern robots are designed to complement or completely replace humans. Even for those skilled in robot programming, designing a control schema for such robots to carry out these tasks is typically a challenging process that necessitates starting from scratch with a new and distinct controller for each task. The designer must consider the wide range of circumstances the robot might encounter. This kind of manual programming is typically expensive and time consuming. It would be more beneficial if a robot could learn the task on its own rather than having to be preprogrammed to perform all these tasks. In this paper, a method for the path planning of a robot in a known environment is implemented using Q-Learning by finding an optimal path from a specified starting and ending point.","PeriodicalId":107574,"journal":{"name":"Journal of Trends in Computer Science and Smart Technology","volume":"123 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Trends in Computer Science and Smart Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.36548/jtcsst.2022.3.004","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In complex planning and control tasks such as manipulating objects, assisting experts in various fields, navigating outdoor environments, and exploring uncharted territory, modern robots are designed to complement or completely replace humans. Even for those skilled in robot programming, designing a control scheme for such robots is typically a challenging process that requires starting from scratch with a new and distinct controller for each task, since the designer must account for the wide range of circumstances the robot might encounter. This kind of manual programming is typically expensive and time-consuming. It would be more beneficial if a robot could learn a task on its own rather than having to be preprogrammed for each one. In this paper, a method for the path planning of a mobile robot in a known environment is implemented using Q-Learning, which finds an optimal path between a specified starting point and ending point.
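To make the idea concrete, below is a minimal sketch of tabular Q-learning for grid-based path planning. The grid size, obstacle positions, reward values, and hyperparameters (alpha, gamma, epsilon, episode count) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal tabular Q-learning on a grid world (illustrative sketch only;
# grid size, rewards, and hyperparameters below are assumptions).
ROWS, COLS = 5, 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
START, GOAL = (0, 0), (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}            # known environment: fixed obstacles

alpha, gamma, epsilon, episodes = 0.1, 0.9, 0.2, 2000
Q = np.zeros((ROWS, COLS, len(ACTIONS)))

def step(state, a):
    """Apply action a; stay in place if the move leaves the grid or hits an obstacle."""
    r, c = state
    nr, nc = r + ACTIONS[a][0], c + ACTIONS[a][1]
    if not (0 <= nr < ROWS and 0 <= nc < COLS) or (nr, nc) in OBSTACLES:
        return state, -5.0, False          # penalise invalid moves
    if (nr, nc) == GOAL:
        return (nr, nc), 100.0, True       # large reward for reaching the goal
    return (nr, nc), -1.0, False           # small step cost encourages short paths

for _ in range(episodes):
    state, done = START, False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy rollout of the learned Q-table yields the planned path from START to GOAL.
path, state = [START], START
while state != GOAL and len(path) < ROWS * COLS:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print(path)
```

After training, following the greedy action at each cell traces an obstacle-free route whose cumulative reward corresponds to a shortest path under the chosen step cost.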