Model-based deep reinforcement learning for data-driven motion control of an under-actuated unmanned surface vehicle: Path following and trajectory tracking
Zhouhua Peng , Enrong Liu , Chao Pan , Haoliang Wang , Dan Wang , Lu Liu
DOI: 10.1016/j.jfranklin.2022.10.020
Journal of The Franklin Institute-engineering and Applied Mathematics, Volume 360, Issue 6, Pages 4399-4426, April 2023
URL: https://www.sciencedirect.com/science/article/pii/S0016003222007463
Citations: 2
Abstract
Unmanned surface vehicles (USVs) are a promising marine robotic platform for numerous potential applications in ocean space due to their small size, low cost, and high autonomy. Modelling and control of USVs is a challenging task due to their intrinsic nonlinearities, strong couplings, high uncertainty, under-actuation, and multiple constraints. Well-designed motion controllers may not be effective when exposed to the complex and dynamic sea environment. This paper presents a fully data-driven, learning-based motion control method for a USV based on model-based deep reinforcement learning. Specifically, we first train a data-driven prediction model for the USV, based on a deep network, using recorded input and output data. Building on the learned prediction model, model predictive motion controllers are presented for trajectory-tracking and path-following tasks. It is shown that, after learning from random data collected from the USV, the proposed data-driven motion controller is able to follow trajectories or parameterized paths accurately with excellent sample efficiency. Simulation results are given to illustrate the proposed deep reinforcement learning scheme for fully data-driven motion control without any a priori model information of the USV.
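The two-step pipeline described above — fit a prediction model from recorded input/output data, then plan with it inside a model predictive controller — can be sketched as follows. This is a minimal illustrative example, not the paper's method: the paper learns a deep-network model of USV dynamics, whereas here a linear least-squares model and a toy double-integrator plant stand in so the example stays self-contained, and the sampling-based (random-shooting) MPC is one common choice for planning with a learned model. All names and dynamics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # Toy surrogate plant (NOT the USV model): a damped double integrator.
    pos, vel = x
    return np.array([pos + 0.1 * vel, 0.9 * vel + 0.1 * u])

# Step 1: collect random input/output data and fit x_next ~= [x, u] @ W.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(200):
    u = rng.uniform(-1, 1)
    xn = true_dynamics(x, u)
    X.append(x); U.append([u]); Xn.append(xn)
    x = xn
Phi = np.hstack([np.array(X), np.array(U)])             # features [x, u]
W, *_ = np.linalg.lstsq(Phi, np.array(Xn), rcond=None)  # learned model

def predict(x, u):
    # One-step prediction with the learned (here, linear) model.
    return np.hstack([x, u]) @ W

# Step 2: random-shooting MPC -- sample candidate input sequences, roll each
# out through the learned model, and apply the first input of the cheapest.
def mpc_step(x, target, horizon=10, n_samples=256):
    seqs = rng.uniform(-1, 1, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        xr = x.copy()
        for u in seq:
            xr = predict(xr, u)
            costs[i] += (xr[0] - target) ** 2   # squared tracking error
    return seqs[np.argmin(costs)][0]

# Receding-horizon loop: track a constant reference position.
x = np.zeros(2)
for _ in range(60):
    u = mpc_step(x, target=1.0)
    x = true_dynamics(x, np.clip(u, -1, 1))
print("final tracking error:", abs(x[0] - 1.0))
```

Because the controller only ever queries the learned `predict` function, no a priori model of the plant is needed at planning time, which mirrors the data-driven spirit of the paper; the paper replaces the linear fit with a deep network to capture the USV's nonlinear, coupled dynamics.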
About the journal:
The Journal of The Franklin Institute has an established reputation for publishing high-quality papers in the field of engineering and applied mathematics. Its current focus is on control systems, complex networks and dynamic systems, signal processing and communications, and their applications. All submitted papers are peer-reviewed. The Journal publishes original research papers and research review papers of substance. Papers and special focus issues are judged on their possible lasting value, which has been and continues to be the strength of the Journal of The Franklin Institute.