{"title":"Data Driven Control of Interacting Two Tank Hybrid System using Deep Reinforcement Learning","authors":"David Mathew Jones, S. Kanagalakshmi","doi":"10.1109/ICCCA52192.2021.9666405","DOIUrl":null,"url":null,"abstract":"This paper investigates the use of a Deep Neural Network based Reinforcement Learning(RL) algorithm applied to a non-linear system for the design of a controller. It aims to augment the large amounts of data that we possess along with the already known dynamics of the non-linear hybrid tank system for effective control of the liquid level. Control systems represent a non-linear optimization problem, and Machine Learning helps to achieve non-linear optimization using large amounts of data. This document demonstrates the use of Deep Deterministic Policy Gradient (DDPG), an off-policy based actor-critic methodology of reinforcement learning, which is efficient in solving problems where states and actions lie in continuous spaces instead of discrete spaces. The test bench on which RL is being applied is a Multi-Input Multi-Output (MIMO) system called the Interacting Two Tank Hybrid System, with the aim of controlling the liquid levels in the two tanks. In Deep Reinforcement Learning, we are implementing the policy of the agent by means of deep neural networks. The idea behind using the neural network architectures for reinforcement learning is that we want reward signals obtained to strengthen the connection that leads to a good policy. Moreover, these deep neural networks are unique in their ability to represent complex functions if we give them ample amounts of data.","PeriodicalId":399605,"journal":{"name":"2021 IEEE 6th International Conference on Computing, Communication and Automation (ICCCA)","volume":"171 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 6th International Conference on Computing, Communication and Automation (ICCCA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCA52192.2021.9666405","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
This paper investigates the use of a deep neural network based reinforcement learning (RL) algorithm for the design of a controller for a non-linear system. It aims to combine the large amounts of available process data with the already known dynamics of the non-linear hybrid tank system for effective control of the liquid level. Controller design poses a non-linear optimization problem, and machine learning helps to solve such problems using large amounts of data. The paper demonstrates the use of Deep Deterministic Policy Gradient (DDPG), an off-policy actor-critic reinforcement learning method that is well suited to problems whose states and actions lie in continuous rather than discrete spaces. The test bench to which RL is applied is a Multi-Input Multi-Output (MIMO) system, the Interacting Two Tank Hybrid System, with the aim of controlling the liquid levels in the two tanks. In deep reinforcement learning, the agent's policy is implemented by deep neural networks. The motivation for using neural network architectures in reinforcement learning is that the reward signals obtained should strengthen the connections that lead to a good policy. Moreover, deep neural networks can represent complex functions when given ample amounts of data.
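The abstract does not give implementation details, but a minimal sketch of the DDPG actor-critic structure it refers to could look as follows. This is an illustrative PyTorch sketch under stated assumptions: the two-tank dynamics, flow coefficients, network sizes, and action scaling are hypothetical placeholders, not the paper's actual test-bench model or hyperparameters.

```python
# Minimal DDPG actor-critic sketch in PyTorch for a two-tank level-control
# problem. The tank model, flow coefficients, and network sizes below are
# illustrative assumptions, not the values used in the paper.
import numpy as np
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps tank levels (2 states) to pump inputs (2 actions)."""
    def __init__(self, state_dim=2, action_dim=2, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    """Q-network: scores a (state, action) pair with a scalar value estimate."""
    def __init__(self, state_dim=2, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def two_tank_step(levels, inflows, dt=1.0, a12=0.05, a2=0.04, area=1.0):
    """One Euler step of an assumed interacting two-tank model:
    tank 1 and tank 2 exchange flow through a coupling valve, tank 2 drains to an outlet."""
    h1, h2 = levels
    q12 = a12 * np.sign(h1 - h2) * np.sqrt(abs(h1 - h2))   # coupling flow
    qout = a2 * np.sqrt(max(h2, 0.0))                        # outlet flow
    h1 = max(h1 + dt / area * (inflows[0] - q12), 0.0)
    h2 = max(h2 + dt / area * (inflows[1] + q12 - qout), 0.0)
    return np.array([h1, h2])

def soft_update(target, source, tau=0.005):
    """DDPG soft target-network update: theta_target <- tau*theta + (1 - tau)*theta_target."""
    for tp, sp in zip(target.parameters(), source.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * sp.data)

if __name__ == "__main__":
    actor, critic = Actor(), Critic()
    state = torch.tensor([[0.3, 0.2]])                 # current tank levels (normalised)
    action = actor(state)                              # deterministic pump commands
    q_value = critic(state, action)                     # critic's value estimate
    next_levels = two_tank_step(np.array([0.3, 0.2]), action.detach().numpy()[0])
    print(action.detach().numpy(), q_value.item(), next_levels)
```

In a full DDPG loop these networks would be paired with target copies, an experience replay buffer, and exploration noise on the actor's output; the reward (e.g. a penalty on the deviation of each tank level from its set point) would be an additional design choice not specified in the abstract.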