{"title":"Open-Loop Optimal Control for Tracking a Reference Signal With Approximate Dynamic Programming","authors":"Jorge A. Diaz, Lei Xu, Tohid Sardarmehni","doi":"10.1115/imece2022-96769","DOIUrl":null,"url":null,"abstract":"\n Dynamic programming (DP) provides a systematic, closed-loop solution for optimal control problems. However, it suffers from the curse of dimensionality in higher orders. Approximate dynamic programming (ADP) methods can remedy this by finding near-optimal rather than exact optimal solutions. In summary, ADP uses function approximators, such as neural networks, to approximate optimal control solutions. ADP can then converge to the near-optimal solution using techniques such as reinforcement learning (RL). The two main challenges in using this approach are finding a proper training domain and selecting a suitable neural network architecture for precisely approximating the solutions with RL. Users select the training domain and the neural networks mostly by trial and error, which is tedious and time-consuming. This paper proposes trading the closed-loop solution provided by ADP methods for more effectively selecting the domain of training. To do so, we train a neural network using a small and moving domain around the reference signal. We asses the method’s effectiveness by applying it to a widely used benchmark problem, the Van der Pol oscillator; and a real-world problem, controlling a quadrotor to track a reference trajectory. Simulation results demonstrate comparable performance to traditional methods while reducing computational requirements.","PeriodicalId":302047,"journal":{"name":"Volume 5: Dynamics, Vibration, and Control","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Volume 5: Dynamics, Vibration, and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/imece2022-96769","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Dynamic programming (DP) provides a systematic, closed-loop solution for optimal control problems. However, it suffers from the curse of dimensionality in higher-order systems. Approximate dynamic programming (ADP) methods can remedy this by finding near-optimal rather than exactly optimal solutions. In short, ADP uses function approximators, such as neural networks, to approximate optimal control solutions, and it can converge to a near-optimal solution using techniques such as reinforcement learning (RL). The two main challenges in this approach are finding a proper training domain and selecting a suitable neural network architecture so that RL approximates the solutions precisely. Users mostly select the training domain and the neural networks by trial and error, which is tedious and time-consuming. This paper proposes trading the closed-loop solution provided by ADP methods for a more effective selection of the training domain. To do so, we train a neural network on a small, moving domain around the reference signal. We assess the method's effectiveness by applying it to a widely used benchmark problem, the Van der Pol oscillator, and to a real-world problem, controlling a quadrotor to track a reference trajectory. Simulation results demonstrate performance comparable to traditional methods while reducing computational requirements.
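
The following is a minimal sketch of the moving-domain idea described in the abstract, not the authors' implementation: at each point of the reference trajectory, training states are drawn only from a small box centered on that reference point, and a value approximator is fitted by least-squares Bellman backups. The controlled Van der Pol discretization, the cost weights, the box half-width, and the polynomial features standing in for a neural network are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch: fitted value iteration on a moving training domain
# around a reference trajectory, for the controlled Van der Pol oscillator.

def van_der_pol(x, u, dt=0.05):
    """One Euler step of the Van der Pol oscillator with an additive control input (assumed model)."""
    x1, x2 = x
    dx1 = x2
    dx2 = (1.0 - x1 ** 2) * x2 - x1 + u
    return np.array([x1 + dt * dx1, x2 + dt * dx2])

class ValueApproximator:
    """Polynomial features used here as a simple stand-in for the paper's neural network."""
    def __init__(self):
        self.w = np.zeros(6)

    def features(self, e):
        e1, e2 = e
        return np.array([e1 ** 2, e2 ** 2, e1 * e2,
                         e1 ** 4, e2 ** 4, (e1 * e2) ** 2])

    def value(self, e):
        return self.w @ self.features(e)

def train_on_moving_domain(reference, approx, half_width=0.5, n_samples=50,
                           controls=np.linspace(-2.0, 2.0, 11),
                           dt=0.05, sweeps=5, seed=0):
    """Sample training states only from a small box that moves along the
    reference trajectory, then fit the value weights by least squares."""
    rng = np.random.default_rng(seed)
    for _ in range(sweeps):
        X, targets = [], []
        for r in reference:
            # Moving training domain: a small box centered on the reference point.
            states = r + rng.uniform(-half_width, half_width, size=(n_samples, 2))
            for x in states:
                e = x - r  # tracking error
                # One-step Bellman backup over a coarse grid of candidate controls.
                backups = [e @ e + 0.1 * u ** 2 + approx.value(van_der_pol(x, u, dt) - r)
                           for u in controls]
                X.append(approx.features(e))
                targets.append(min(backups))
        approx.w, *_ = np.linalg.lstsq(np.array(X), np.array(targets), rcond=None)
    return approx

# Usage: fit the value approximator along a sinusoidal reference trajectory.
t = np.arange(0.0, 2.0, 0.05)
reference = np.stack([np.sin(t), np.cos(t)], axis=1)
approx = train_on_moving_domain(reference, ValueApproximator())
```

For simplicity, the sketch measures the next-state error against the current reference point rather than the next one; the point it illustrates is only that training states never leave the small, moving neighborhood of the reference, rather than covering the whole state space.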