Deep Reinforcement Learning for Transfer of Control Policies
James Cunningham, S. Miller, M. Yukish, T. Simpson, Conrad S. Tucker
Volume 2A: 45th Design Automation Conference, published 2019-11-25. DOI: 10.1115/detc2019-97689
We present a form-aware reinforcement learning (RL) method to extend control knowledge from one design form to another, without losing the ability to control the original design. A major challenge in developing control knowledge is the creation of generalized control policies across designs of varying form. Our presented RL policy is form-aware because, in addition to receiving dynamic state information about the environment, it also receives states that encode information about the form of the design being controlled. In this paper, we investigate the impact of this mixed state space on transfer learning. We present a transfer learning method for extending a control policy to a different design form, while continuing to expose the agent to the original design during training on the new design. To demonstrate this concept, we present a case study of a multi-rotor aircraft simulation, wherein the designated task is to achieve a stable hover. We show that by introducing form states, an RL agent is able to learn a control policy that achieves the hovering task with both a four-rotor and a three-rotor design at once, whereas without the form states it can only hover with the four-rotor design. We also benchmark our method against a test case that removes the transfer learning component, as well as a test case that removes the continued exposure to the original design, to show the value of each of these components. We find that form states, transfer learning, and parallel learning all contribute to a more robust control policy for the new design, and that parallel learning is especially important for maintaining control knowledge of the original design.
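The two mechanisms the abstract describes — a mixed state space (dynamic states plus form states) and parallel learning on both designs — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; all names (`form_observation`, `mixed_batch`, the particular form encodings) are hypothetical, and a real setup would feed these observations to an RL algorithm such as a policy-gradient learner.

```python
def form_observation(dynamic_state, form_vector):
    """Form-aware observation: concatenate the dynamic state (e.g.
    attitude and angular rates) with a fixed vector encoding the
    design's form, so a single policy can serve multiple designs."""
    return list(dynamic_state) + list(form_vector)


# Hypothetical form encodings: rotor count plus a one-hot design flag.
QUAD_FORM = [4.0, 1.0, 0.0]   # original four-rotor design
TRI_FORM  = [3.0, 0.0, 1.0]   # new three-rotor design


def mixed_batch(quad_transitions, tri_transitions):
    """Parallel learning: interleave experience from the original
    (four-rotor) and new (three-rotor) designs, so policy updates
    never see only the new form and control of the original design
    is retained."""
    batch = []
    for q, t in zip(quad_transitions, tri_transitions):
        batch.append(q)
        batch.append(t)
    return batch
```

For example, `form_observation([0.1, -0.2, 0.0], QUAD_FORM)` yields a six-element policy input in which the last three entries identify the design; during transfer, the same network is trained on `mixed_batch(...)` so the original design's behavior is rehearsed alongside the new one.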