Authors: Wenxiao Wang, Xiaojuan Wang, Renqiang Li, Haosheng Jiang, Ding Liu, X. Ping
DOI: 10.1109/DDCLS58216.2023.10166333
Published in: 2023 IEEE 12th Data Driven Control and Learning Systems Conference (DDCLS)
Publication date: 2023-05-12
Transfer Reinforcement Learning of Robotic Grasping Training using Neural Networks with Lateral Connections
Reinforcement learning, as an effective framework for solving sequential decision-making tasks in machine learning, has been widely used in manipulator decision control. However, for manipulator grasping tasks in complex environments, it is difficult for the agent to improve performance through exploration, because high-quality interaction samples are hard to obtain. In addition, trained reinforcement learning models usually generalize poorly across tasks and must be retrained to adapt to task changes. To address these issues, researchers have proposed transfer learning, which uses external prior knowledge to assist the target task and thereby improve the reinforcement learning process. In this paper, transfer from a manipulator grasping source task to a grasping target task, based on the deep Q-network algorithm, is achieved by constructing DenseNet-style lateral connections between fully convolutional neural networks. Experimental results in the CoppeliaSim simulation environment show that the method successfully achieves inter-task transfer through these lateral connections. The validated transfer reinforcement learning approach improves the effectiveness of task training, while the lateral connections reduce network complexity.
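The lateral-connection idea described in the abstract can be illustrated with a minimal sketch: a frozen network trained on the source task feeds its intermediate features into the target-task network by concatenation, DenseNet-style, so the target network reuses source knowledge instead of learning from scratch. Everything below is hypothetical, not the paper's implementation: plain dense layers stand in for the fully convolutional networks, and all class and variable names are invented for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class SourceNet:
    """Tiny stand-in for the feature extractor trained on the grasping
    source task; its weights are frozen during transfer."""
    def __init__(self, in_dim, hidden, out_dim, rng):
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, out_dim)) * 0.1

    def features(self, x):
        return relu(x @ self.W1)

class LateralTargetNet:
    """Target-task network whose second layer consumes its own features
    concatenated with the frozen source features (a DenseNet-style
    lateral connection)."""
    def __init__(self, in_dim, hidden, out_dim, source_net, rng):
        self.source = source_net
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        # The output layer sees target-hidden + source-hidden features.
        self.W2 = rng.standard_normal(
            (hidden + source_net.W1.shape[1], out_dim)) * 0.1

    def forward(self, x):
        own = relu(x @ self.W1)
        lateral = self.source.features(x)  # frozen, no gradient flows back
        return np.concatenate([own, lateral], axis=1) @ self.W2

rng = np.random.default_rng(0)
source = SourceNet(8, 16, 4, rng)                     # "pretrained" source net
target = LateralTargetNet(8, 16, 4, source, rng)      # target net with lateral link
q_values = target.forward(rng.standard_normal((2, 8)))
print(q_values.shape)  # (2, 4): one Q-value per action for each of 2 states
```

In a DQN setting only `target.W1` and `target.W2` would be updated; the concatenation grows the input width of the output layer rather than adding a full parallel pathway, which is the sense in which lateral connections can keep the combined network compact.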