Model & Feature Agnostic Eye-in-Hand Visual Servoing using Deep Reinforcement Learning with Prioritized Experience Replay
Prerna Singh, Virender Singh, S. Dutta, Swagat Kumar
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), October 2019
DOI: 10.1109/RO-MAN46459.2019.8956447
Citations: 1
Abstract
This paper presents a feature-agnostic and model-free visual servoing (VS) technique using deep reinforcement learning (DRL) that exploits two new experience replay buffer architectures within deep deterministic policy gradient (DDPG). The proposed architectures are significantly faster and converge in fewer steps. We use the proposed method to learn end-to-end VS in an eye-in-hand configuration. In traditional DDPG, the experience replay memory is sampled uniformly at random when training the actor-critic network. This leads to a loss of useful experiences when the buffer contains very few successful examples. We address this problem with two new replay buffer architectures: (a) min-heap DDPG (mH-DDPG) and (b) dual replay buffer DDPG (dR-DDPG). The former implements the replay buffer as a min-heap data structure, whereas the latter uses two buffers to separate “good” examples from “bad” ones. The training data for the actor-critic network is then drawn as a weighted combination of the two buffers. The proposed algorithms are validated in simulation with the UR5 robotic manipulator model. We observe that as the number of good experiences in the training data increases, the convergence time decreases. mH-DDPG and dR-DDPG improve the rate of convergence by 27.25% and 43.25%, respectively, over the state-of-the-art DDPG.
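To make the dual-buffer idea concrete, below is a minimal Python sketch of a dR-DDPG-style replay buffer that keeps successful and unsuccessful transitions in separate buffers and samples a weighted mix of the two for each minibatch. The class name, buffer capacities, mixing weight `w_good`, and the success criterion are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import deque


class DualReplayBuffer:
    """Sketch of a dual replay buffer: 'good' vs 'bad' transitions (assumed design)."""

    def __init__(self, capacity=100_000, w_good=0.7):
        self.good = deque(maxlen=capacity)   # transitions from successful episodes
        self.bad = deque(maxlen=capacity)    # transitions from unsuccessful episodes
        self.w_good = w_good                 # fraction of each batch drawn from `good`

    def add(self, transition, success):
        # `success` is a hypothetical flag, e.g. whether the servoing goal was reached.
        (self.good if success else self.bad).append(transition)

    def sample(self, batch_size):
        # Weighted combination of the two buffers; fall back to whichever
        # buffer has data if the other is still (nearly) empty.
        n_good = min(int(self.w_good * batch_size), len(self.good))
        n_bad = min(batch_size - n_good, len(self.bad))
        batch = random.sample(list(self.good), n_good) + random.sample(list(self.bad), n_bad)
        random.shuffle(batch)
        return batch


# Usage: tag each episode's transitions by outcome, then sample mixed
# minibatches for the actor-critic update.
buffer = DualReplayBuffer()
for t in range(6):
    buffer.add(("state", "action", 1.0, "next_state"), success=(t % 2 == 0))
print(len(buffer.sample(4)))
```

The mH-DDPG variant would instead order a single buffer with a min-heap keyed on a priority value so that low-priority experiences are the first to be evicted; the weighted two-buffer sampling above is specific to dR-DDPG.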