Acquisition of Flipper Motion in Step-Climbing of Tracked Robot Using Reinforcement Learning

Ryosuke Eto, J. Yamakawa
{"title":"基于强化学习的履带式机器人爬阶鳍运动获取","authors":"Ryosuke Eto, J. Yamakawa","doi":"10.56884/sgwj1011","DOIUrl":null,"url":null,"abstract":"Remotely piloted robots have been expected for disaster operations to prevent secondary disasters to rescuers. The robots are required to have a high performance to overcome obstacles such as debris and bumps. Tracked robots with flipper arms on the front, back, left, and right sides can improve their ability to overcome bumps by changing the flipper angles, and are thus expected to be used as rescue robots. However, the six degrees of freedom of the left and right crawlers and four flipper arms require a high level of skill to maneuver the robot. Therefore, a semi-autonomous control system that automatically controls the flipper arms according to the terrain is expected. In this study, we proposed a method to determine the front and rear flipper angles in step-climbing using reinforcement learning with Double Deep Q Network. The input data were the step height, distance to the step, and current front and rear flipper angles. The outputs were amounts of front and rear flipper angle variations. The behavior of the robot and rewards were calculated using a quasi-static model that considers the slips between the step, floor, and crawler. Positive rewards were given for successful step stepping over steps and negative rewards for unsuccessful steps. Furthermore, the flipper motions at which slippage decreased were obtained by subtracting the sum of the squared values of the slippage rates from the rewards. As the results, it was confirmed that the robot slips less when the body is lifted by the rear flipper than when it runs over along the step shape.","PeriodicalId":447600,"journal":{"name":"Proceedings of the 11th Asia-Pacific Regional Conference of the ISTVS","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Acquisition of Flipper Motion in Step-Climbing of Tracked Robot Using Reinforcement Learning\",\"authors\":\"Ryosuke Eto, J. Yamakawa\",\"doi\":\"10.56884/sgwj1011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Remotely piloted robots have been expected for disaster operations to prevent secondary disasters to rescuers. The robots are required to have a high performance to overcome obstacles such as debris and bumps. Tracked robots with flipper arms on the front, back, left, and right sides can improve their ability to overcome bumps by changing the flipper angles, and are thus expected to be used as rescue robots. However, the six degrees of freedom of the left and right crawlers and four flipper arms require a high level of skill to maneuver the robot. Therefore, a semi-autonomous control system that automatically controls the flipper arms according to the terrain is expected. In this study, we proposed a method to determine the front and rear flipper angles in step-climbing using reinforcement learning with Double Deep Q Network. The input data were the step height, distance to the step, and current front and rear flipper angles. The outputs were amounts of front and rear flipper angle variations. The behavior of the robot and rewards were calculated using a quasi-static model that considers the slips between the step, floor, and crawler. Positive rewards were given for successful step stepping over steps and negative rewards for unsuccessful steps. 
Furthermore, the flipper motions at which slippage decreased were obtained by subtracting the sum of the squared values of the slippage rates from the rewards. As the results, it was confirmed that the robot slips less when the body is lifted by the rear flipper than when it runs over along the step shape.\",\"PeriodicalId\":447600,\"journal\":{\"name\":\"Proceedings of the 11th Asia-Pacific Regional Conference of the ISTVS\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 11th Asia-Pacific Regional Conference of the ISTVS\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.56884/sgwj1011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 11th Asia-Pacific Regional Conference of the ISTVS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.56884/sgwj1011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Remotely piloted robots are expected to be used in disaster response operations to prevent secondary injury to rescuers. Such robots must have high performance in overcoming obstacles such as debris and bumps. Tracked robots with flipper arms at the front, back, left, and right can improve their ability to climb over bumps by changing the flipper angles, and are therefore expected to serve as rescue robots. However, the six degrees of freedom of the left and right crawlers and the four flipper arms demand a high level of operator skill. A semi-autonomous control system that automatically adjusts the flipper arms according to the terrain is therefore desirable. In this study, we propose a method to determine the front and rear flipper angles in step-climbing using reinforcement learning with a Double Deep Q-Network (DDQN). The inputs are the step height, the distance to the step, and the current front and rear flipper angles; the outputs are the increments of the front and rear flipper angles. The behavior of the robot and the rewards were calculated with a quasi-static model that accounts for slip between the step, the floor, and the crawler. A positive reward was given when the robot successfully climbed over the step and a negative reward when it failed. Furthermore, flipper motions with reduced slippage were obtained by subtracting the sum of the squared slip rates from the reward. The results confirm that the robot slips less when the body is lifted by the rear flipper than when it simply runs over the step, following its shape.
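
The setup described in the abstract maps naturally onto a small discrete-action DDQN. The sketch below is not the authors' code; it only illustrates one way the state vector, the discretised flipper actions, the slip-penalised reward, and the Double-DQN target could be written in PyTorch. The layer sizes, the assumed ±5° flipper-angle increments, and all identifiers are assumptions made for illustration, and the paper's quasi-static slip model is not reproduced here.

```python
# Minimal DDQN sketch for the flipper-angle task (illustrative assumptions only).
import random

import torch
import torch.nn as nn

# State: [step height, distance to step, front flipper angle, rear flipper angle]
STATE_DIM = 4
# Action: every combination of {-delta, 0, +delta} for the front and rear flippers
# (a 5-degree increment is an assumption, not a value from the paper).
ACTION_DELTAS = [(df, dr) for df in (-5.0, 0.0, 5.0) for dr in (-5.0, 0.0, 5.0)]
N_ACTIONS = len(ACTION_DELTAS)  # 9 discrete actions


class QNet(nn.Module):
    """Small fully connected Q-network; the layer sizes are illustrative."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def shaped_reward(base_reward: float, slip_rates: list) -> float:
    """Reward shaping as described in the abstract: the sum of the squared
    slip rates is subtracted from the success/failure reward."""
    return base_reward - sum(s * s for s in slip_rates)


def select_action(online: QNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the discretised flipper-angle increments."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(online(state.unsqueeze(0)).argmax(dim=1).item())


def double_dqn_target(online: QNet, target: QNet, reward: torch.Tensor,
                      next_state: torch.Tensor, done: torch.Tensor,
                      gamma: float = 0.99) -> torch.Tensor:
    """Double DQN: the online network chooses the next action, the target
    network evaluates it, which reduces over-estimation of Q-values."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

Discretising each flipper command into a small set of angle increments keeps the joint action space tiny (nine actions in this sketch), which is what makes a value-based method such as DDQN practical here; the separation of action selection (online network) from action evaluation (target network) in double_dqn_target is what distinguishes Double DQN from the plain DQN target.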