{"title":"自主水下航行器运动系统中的强化学习方法","authors":"Ting Yu , Qi Zhang , Tiejun Liu","doi":"10.1016/j.apor.2025.104682","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous underwater vehicles (AUVs) have been receiving increasing attention due to the significant role in the exploration of ocean resources. The complex underwater disturbances and highly coupled models pose significant challenges to the design of motion systems (MS) of AUVs. As a result, reinforcement learning (RL) methods that can develop robust control strategies without relying heavily on models have become a research focus of AUVs. However, the task space division for motion systems based on RL remains unclear, and systematic design approaches or summaries are lacking in this field. In this paper, we review RL-based approaches in the motion systems of AUVs in detail. Specifically, the task space of the motion systems of AUVs is classified into three categories: motion control, motion planning, and multi-AUV motion. For each task space, a targeted motion architecture is introduced along with a review of the latest advancements. In terms of motion control, auxiliary and direct motion control approaches are introduced herein. Regarding motion planning, path-oriented, state-oriented, and end-to-end approaches are discussed. For multi-AUV motion, formation motion and motion task coordination are summarized. Finally, challenges of applying reinforcement learning approaches to the motion systems of AUVs are outlined and potential future breakthroughs in this field are anticipated herein. This paper provides a detailed review of RL-based methods in motion systems of AUVs, summarizing the design architecture and network model based on RL approaches for motion systems. This will offer valuable design solutions and insights for practitioners and beginners, further promoting the application of RL methods to address complex motion system design challenges of AUVs.</div></div>","PeriodicalId":8261,"journal":{"name":"Applied Ocean Research","volume":"161 ","pages":"Article 104682"},"PeriodicalIF":4.3000,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning approaches in the motion systems of autonomous underwater vehicles\",\"authors\":\"Ting Yu , Qi Zhang , Tiejun Liu\",\"doi\":\"10.1016/j.apor.2025.104682\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Autonomous underwater vehicles (AUVs) have been receiving increasing attention due to the significant role in the exploration of ocean resources. The complex underwater disturbances and highly coupled models pose significant challenges to the design of motion systems (MS) of AUVs. As a result, reinforcement learning (RL) methods that can develop robust control strategies without relying heavily on models have become a research focus of AUVs. However, the task space division for motion systems based on RL remains unclear, and systematic design approaches or summaries are lacking in this field. In this paper, we review RL-based approaches in the motion systems of AUVs in detail. Specifically, the task space of the motion systems of AUVs is classified into three categories: motion control, motion planning, and multi-AUV motion. For each task space, a targeted motion architecture is introduced along with a review of the latest advancements. In terms of motion control, auxiliary and direct motion control approaches are introduced herein. 
Regarding motion planning, path-oriented, state-oriented, and end-to-end approaches are discussed. For multi-AUV motion, formation motion and motion task coordination are summarized. Finally, challenges of applying reinforcement learning approaches to the motion systems of AUVs are outlined and potential future breakthroughs in this field are anticipated herein. This paper provides a detailed review of RL-based methods in motion systems of AUVs, summarizing the design architecture and network model based on RL approaches for motion systems. This will offer valuable design solutions and insights for practitioners and beginners, further promoting the application of RL methods to address complex motion system design challenges of AUVs.</div></div>\",\"PeriodicalId\":8261,\"journal\":{\"name\":\"Applied Ocean Research\",\"volume\":\"161 \",\"pages\":\"Article 104682\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Ocean Research\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S014111872500269X\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, OCEAN\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Ocean Research","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S014111872500269X","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, OCEAN","Score":null,"Total":0}
Reinforcement learning approaches in the motion systems of autonomous underwater vehicles
Autonomous underwater vehicles (AUVs) have been receiving increasing attention due to their significant role in the exploration of ocean resources. Complex underwater disturbances and highly coupled dynamic models pose significant challenges to the design of AUV motion systems (MS). As a result, reinforcement learning (RL) methods, which can develop robust control strategies without relying heavily on models, have become a research focus for AUVs. However, the division of the task space for RL-based motion systems remains unclear, and systematic design approaches or summaries are lacking in this field. In this paper, we review RL-based approaches in the motion systems of AUVs in detail. Specifically, the task space of AUV motion systems is classified into three categories: motion control, motion planning, and multi-AUV motion. For each task space, a targeted motion architecture is introduced along with a review of the latest advancements. In terms of motion control, auxiliary and direct motion control approaches are introduced. Regarding motion planning, path-oriented, state-oriented, and end-to-end approaches are discussed. For multi-AUV motion, formation motion and motion task coordination are summarized. Finally, the challenges of applying RL approaches to AUV motion systems are outlined, and potential future breakthroughs in this field are anticipated. This paper provides a detailed review of RL-based methods in the motion systems of AUVs, summarizing the design architectures and network models of RL approaches for motion systems. It offers valuable design solutions and insights for practitioners and beginners, further promoting the application of RL methods to the complex motion-system design challenges of AUVs.
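To make the model-free idea behind the reviewed "direct motion control" category concrete, the following is a minimal illustrative sketch (not taken from the paper): a tabular Q-learning agent learns a heading-correction policy for an AUV without any explicit hydrodynamic model. The 1-D yaw-error surrogate, the discretization, and all parameter values are assumptions chosen only for illustration.

```python
# Hypothetical sketch of model-free RL for AUV heading control.
# The dynamics surrogate, discretization, and hyperparameters are illustrative assumptions.
import numpy as np

N_STATES = 21          # discretized heading-error bins
N_ACTIONS = 3          # rudder commands: left, hold, right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(err_bin, action):
    """Toy surrogate for yaw-error dynamics plus a random current disturbance."""
    delta = (action - 1) + rng.integers(-1, 2)      # rudder effect + disturbance
    nxt = int(np.clip(err_bin + delta, 0, N_STATES - 1))
    reward = -abs(nxt - N_STATES // 2)              # penalize deviation from zero heading error
    return nxt, reward

for episode in range(2000):
    s = int(rng.integers(N_STATES))
    for _ in range(50):
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Standard Q-learning update: no model of the AUV dynamics is required.
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        s = s2

print("Greedy rudder action per heading-error bin:", np.argmax(Q, axis=1))
```

In practice, the approaches surveyed in the paper replace this tabular setup with deep actor-critic networks and continuous thruster commands, but the same interaction loop (observe state, act, receive reward, update policy) underlies them.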
Journal introduction:
The aim of Applied Ocean Research is to encourage the submission of papers that advance the state of knowledge in a range of topics relevant to ocean engineering.