Development of Visual Smooth Pursuit Model Using Inverse Reinforcement Learning For Humanoid Robots

Hamad Ud Din, Wasif Muhammad, N. Siddique, M. J. Irshad, Ali Asghar, M. W. Jabbar
DOI: 10.1109/ICEPECC57281.2023.10209527
Published in: 2023 International Conference on Energy, Power, Environment, Control, and Computing (ICEPECC), 2023-03-08
Citations: 0

Abstract

Research on smooth pursuit began early in the 20th century; today it can be found in everything from small robots to sophisticated automation projects. Many studies now exist in this area, but they are conventionally reward-based, which is not biologically plausible: the robot performs an action, and the agent determines the next course of action from that performance and some externally defined positive or negative reward. In this work, the reward is instead derived from the sensory space rather than the action space, which enables the robot to predict the reward without any predefined reward signal. A new Deep Inverse Reinforcement Learning (DIRL) technique, PC/BC-DIM, is presented. Rather than relying on previously specified rewards, PC/BC-DIM assesses the prediction error between its inputs and decides whether to update the weights. The system operated independently and successfully reached the target location, yielding satisfying results. The iCub humanoid robot simulator is used to evaluate the performance of the proposed system.
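The abstract does not give the PC/BC-DIM equations, so the following NumPy sketch shows the standard PC/BC-DIM inference loop (predictive coding / biased competition with divisive input modulation, in the style of Spratling's formulation) that such a model builds on: a divisive prediction error between the sensory input and its top-down reconstruction drives multiplicative weight-free activation updates, with no externally defined reward. The function name, parameter values, and normalisation choices here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pcbc_dim(x, W, V, n_iter=200, eps1=1e-6, eps2=1e-4):
    """One PC/BC-DIM inference loop (illustrative sketch).

    x : (m,) sensory input
    W : (n, m) feedforward weights, rows summing to 1
    V : (n, m) feedback weights, here W with each row rescaled to peak at 1
    Returns the prediction errors e and prediction-neuron activations y.
    """
    y = np.zeros(W.shape[0])
    for _ in range(n_iter):
        r = V.T @ y                  # top-down reconstruction of the input
        e = x / (eps2 + r)           # divisive prediction error (~1 when r matches x)
        y = (eps1 + y) * (W @ e)     # multiplicative update of the predictions
    return e, y

# Toy usage: two prediction neurons, a four-dimensional sensory input.
rng = np.random.default_rng(0)
W = rng.random((2, 4))
W /= W.sum(axis=1, keepdims=True)    # normalise feedforward rows to sum to 1
V = W / W.max(axis=1, keepdims=True) # feedback rows peak at 1
x = V.T @ np.array([1.0, 0.0])       # input generated by neuron 0's pattern
e, y = pcbc_dim(x, W, V)
# The loop settles with the reconstruction V.T @ y close to x, i.e. e near 1.
```

Because the error `e` measures how well the network's own prediction explains the current sensory input, it can act as an internally generated, reward-like signal; this is the sense in which a reward "derived from the sensory space" needs no predefined reward from the action space.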