Analyzing Different Unstated Goal Constraints on Reinforcement Learning Algorithm for Reacher Task in the Robotic Scrub Nurse Application

Clinton Elian Gandana, J. D. K. Disu, Hongzhi Xie, Lixu Gu

2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), July 2020. DOI: 10.1109/IAICT50021.2020.9172009
The main objective of this paper is to provide an empirical analysis of the effect of various unstated spatial goal constraints on the reinforcement learning policy for the “reacher” task in the Robotic Scrub Nurse (RSN) application. The “reacher” task is an essential component of RSN manipulation tasks such as picking, grasping, or placing surgical instruments. This paper presents our experimental results and an evaluation of the “reacher” task under different spatial goal constraints. We studied the effect of this unstated assumption on a reinforcement learning (RL) algorithm: Soft Actor-Critic with Hindsight Experience Replay (SAC+HER). We used a 7-DoF robotic arm to evaluate this state-of-the-art deep RL algorithm, performing our experiments in a virtual environment while training the robotic arm to reach random target points. The implementation of this RL algorithm showed robust performance, as measured by reward values and success rates. We observed that these reinforcement learning assumptions, particularly the unstated spatial goal constraints, can affect the performance of the RL agent. The importance of the “reacher” task and the development of reinforcement learning applications in medical robotics are the main motivations behind this research objective.
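The two ideas the abstract combines can be illustrated with a minimal sketch: a sparse reward gated by a spatial goal constraint (the “reacher” succeeds only within a distance threshold of the target), and HER’s goal relabeling, which turns failed episodes into useful training signal by pretending the achieved end position was the goal. This is not the paper’s implementation; the function names, the transition dictionary layout, and the 0.05 m threshold are illustrative assumptions.

```python
import math

def sparse_reward(achieved_goal, desired_goal, threshold=0.05):
    """Sparse reward under a spatial goal constraint: 0 if the
    end-effector is within `threshold` of the target, else -1.
    The threshold value is an assumed example, not from the paper."""
    dist = math.dist(achieved_goal, desired_goal)
    return 0.0 if dist <= threshold else -1.0

def her_relabel(episode, threshold=0.05):
    """Hindsight Experience Replay with the 'final' strategy:
    replay every transition as if the goal had been the position
    actually reached at the end of the episode, recomputing the
    sparse reward against that hindsight goal."""
    final_achieved = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        relabeled.append({
            "obs": t["obs"],
            "action": t["action"],
            "achieved_goal": t["achieved_goal"],
            "desired_goal": final_achieved,  # hindsight goal substitution
            "reward": sparse_reward(t["achieved_goal"],
                                    final_achieved, threshold),
        })
    return relabeled
```

With such a scheme, an episode that never reached the commanded target still produces at least one rewarded transition (the final one, whose achieved position trivially matches the hindsight goal), which is why HER is commonly paired with sparse spatial goals like the reacher task.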