Zhitao Yu; Jian Zhang; Shiwen Mao; Senthilkumar C. G. Periaswamy; Justin Patton
Journal of Communications and Information Networks, vol. 7, no. 3, pp. 239-251, 30 September 2022
DOI: 10.23919/JCIN.2022.9906938
URL: https://ieeexplore.ieee.org/document/9906938/
Multi-State-Space Reasoning Reinforcement Learning for Long-Horizon RFID-Based Robotic Searching and Planning Tasks
In recent years, reinforcement learning (RL) has shown great potential for robotic applications. However, RL relies heavily on the reward function: the agent merely follows the policy that maximizes rewards and lacks reasoning ability. As a result, RL may not be suitable for long-horizon robotic tasks. In this paper, we propose a novel learning framework, called multi-state-space reasoning reinforcement learning (SRRL), to endow the agent with a primary reasoning capability. First, we abstract the implicit and latent links between multiple state spaces. Then, we embed historical observations through a long short-term memory (LSTM) network to preserve long-term memories and dependencies. The abstraction and long-term memory capabilities of SRRL enable the agent to execute long-horizon robotic searching and planning tasks more quickly and sensibly by exploiting the correlation between radio frequency identification (RFID) sensing properties and the environment occupancy map. We experimentally validate the efficacy of SRRL in a visual game-based simulation environment, where our methodology outperforms three state-of-the-art baseline schemes by significant margins.
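To make the described pipeline concrete, the sketch below shows one plausible way to fuse two state spaces (RFID sensing features and an occupancy-map embedding) and embed their history with an LSTM before a policy head. This is a minimal illustration only: the abstract does not specify the actual network, so every module name, dimension, and the fusion scheme here are assumptions rather than the authors' implementation.

```python
# Minimal sketch, not the paper's implementation: all names, dimensions,
# and the concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn


class MultiStateSpaceLSTMPolicy(nn.Module):
    """Hypothetical policy that encodes two state spaces separately,
    fuses them, and runs the fused observation history through an LSTM
    to preserve long-term memories and dependencies."""

    def __init__(self, rfid_dim=16, map_dim=64, hidden_dim=128, num_actions=4):
        super().__init__()
        # Separate encoders abstract each state space into a latent vector.
        self.rfid_encoder = nn.Sequential(nn.Linear(rfid_dim, hidden_dim), nn.ReLU())
        self.map_encoder = nn.Sequential(nn.Linear(map_dim, hidden_dim), nn.ReLU())
        # The LSTM embeds the history of fused observations.
        self.lstm = nn.LSTM(input_size=2 * hidden_dim, hidden_size=hidden_dim,
                            batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, rfid_seq, map_seq, hidden=None):
        # rfid_seq: (batch, time, rfid_dim); map_seq: (batch, time, map_dim)
        fused = torch.cat([self.rfid_encoder(rfid_seq),
                           self.map_encoder(map_seq)], dim=-1)
        out, hidden = self.lstm(fused, hidden)
        logits = self.policy_head(out[:, -1])  # act on the most recent step
        return logits, hidden


# Example usage with random observations over a 10-step history.
policy = MultiStateSpaceLSTMPolicy()
logits, _ = policy(torch.randn(1, 10, 16), torch.randn(1, 10, 64))
action = torch.distributions.Categorical(logits=logits).sample()
```

In a sketch like this, the LSTM hidden state would be carried across environment steps at rollout time, which is what gives the agent access to long-term dependencies beyond the current observation.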