Reinforcement learning approaches in the motion systems of autonomous underwater vehicles

Impact Factor: 4.3 · CAS Tier 2 (Engineering & Technology) · JCR Q1, ENGINEERING, OCEAN
Ting Yu, Qi Zhang, Tiejun Liu
{"title":"Reinforcement learning approaches in the motion systems of autonomous underwater vehicles","authors":"Ting Yu ,&nbsp;Qi Zhang ,&nbsp;Tiejun Liu","doi":"10.1016/j.apor.2025.104682","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous underwater vehicles (AUVs) have been receiving increasing attention due to the significant role in the exploration of ocean resources. The complex underwater disturbances and highly coupled models pose significant challenges to the design of motion systems (MS) of AUVs. As a result, reinforcement learning (RL) methods that can develop robust control strategies without relying heavily on models have become a research focus of AUVs. However, the task space division for motion systems based on RL remains unclear, and systematic design approaches or summaries are lacking in this field. In this paper, we review RL-based approaches in the motion systems of AUVs in detail. Specifically, the task space of the motion systems of AUVs is classified into three categories: motion control, motion planning, and multi-AUV motion. For each task space, a targeted motion architecture is introduced along with a review of the latest advancements. In terms of motion control, auxiliary and direct motion control approaches are introduced herein. Regarding motion planning, path-oriented, state-oriented, and end-to-end approaches are discussed. For multi-AUV motion, formation motion and motion task coordination are summarized. Finally, challenges of applying reinforcement learning approaches to the motion systems of AUVs are outlined and potential future breakthroughs in this field are anticipated herein. This paper provides a detailed review of RL-based methods in motion systems of AUVs, summarizing the design architecture and network model based on RL approaches for motion systems. This will offer valuable design solutions and insights for practitioners and beginners, further promoting the application of RL methods to address complex motion system design challenges of AUVs.</div></div>","PeriodicalId":8261,"journal":{"name":"Applied Ocean Research","volume":"161 ","pages":"Article 104682"},"PeriodicalIF":4.3000,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Ocean Research","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S014111872500269X","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, OCEAN","Score":null,"Total":0}
Citations: 0

Abstract

Autonomous underwater vehicles (AUVs) have been receiving increasing attention due to their significant role in the exploration of ocean resources. Complex underwater disturbances and highly coupled models pose significant challenges to the design of the motion systems (MS) of AUVs. As a result, reinforcement learning (RL) methods, which can develop robust control strategies without relying heavily on models, have become a focus of AUV research. However, the division of the task space for RL-based motion systems remains unclear, and systematic design approaches or summaries are lacking in this field. In this paper, we review RL-based approaches in the motion systems of AUVs in detail. Specifically, the task space of AUV motion systems is classified into three categories: motion control, motion planning, and multi-AUV motion. For each task space, a targeted motion architecture is introduced along with a review of the latest advancements. For motion control, auxiliary and direct motion control approaches are introduced. For motion planning, path-oriented, state-oriented, and end-to-end approaches are discussed. For multi-AUV motion, formation motion and motion task coordination are summarized. Finally, the challenges of applying reinforcement learning approaches to the motion systems of AUVs are outlined, and potential future breakthroughs in this field are anticipated. This paper provides a detailed review of RL-based methods for the motion systems of AUVs, summarizing the design architectures and network models of RL approaches for motion systems. This offers valuable design solutions and insights for practitioners and beginners, further promoting the application of RL methods to address the complex motion-system design challenges of AUVs.
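The core appeal described in the abstract, learning a robust control law from interaction rather than from an accurate hydrodynamic model, can be illustrated with a minimal, self-contained sketch. The example below is not taken from the paper: it applies tabular Q-learning to a hypothetical first-order yaw-response model to learn a rudder policy for AUV heading control. All dynamics parameters, the state discretization, and the reward terms are illustrative assumptions.

```python
# Minimal sketch (not from the reviewed paper): tabular Q-learning for AUV heading control.
# The yaw dynamics below are a hypothetical first-order (Nomoto-style) model, used only
# to show that an RL agent can learn a usable rudder policy from interaction alone.
import numpy as np

N_BINS = 21                                   # discretized heading-error states
ACTIONS = np.radians([-20, -10, 0, 10, 20])   # candidate rudder angles (rad)
Q = np.zeros((N_BINS, len(ACTIONS)))          # action-value table

def bin_error(err):
    """Map a heading error in [-pi, pi] to a discrete state index."""
    err = (err + np.pi) % (2 * np.pi) - np.pi
    return int(np.clip((err + np.pi) / (2 * np.pi) * N_BINS, 0, N_BINS - 1))

def step(err, yaw_rate, rudder, dt=0.5, k=0.8, tau=3.0):
    """Hypothetical first-order yaw response (illustration only, not a real AUV model)."""
    yaw_rate += dt * (k * rudder - yaw_rate) / tau
    err -= dt * yaw_rate                       # error shrinks as the vehicle turns toward the goal
    err = (err + np.pi) % (2 * np.pi) - np.pi
    return err, yaw_rate

alpha, gamma, eps = 0.1, 0.95, 0.2             # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(2000):
    err = rng.uniform(-np.pi, np.pi)           # random initial heading error
    yaw_rate = 0.0
    s = bin_error(err)
    for t in range(100):
        # epsilon-greedy action selection
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        err, yaw_rate = step(err, yaw_rate, ACTIONS[a])
        s2 = bin_error(err)
        reward = -abs(err) - 0.01 * abs(ACTIONS[a])   # penalize heading error and rudder effort
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2

print("Learned rudder (deg) per heading-error bin:",
      np.degrees(ACTIONS[np.argmax(Q, axis=1)]).round(1))
```

Running the sketch prints the greedy rudder angle for each heading-error bin; a successful run maps large negative errors to negative rudder and large positive errors to positive rudder, i.e. a bang-bang-like steering law learned without any explicit model inversion. Deep RL variants reviewed in the paper replace the Q-table with neural networks to handle continuous states and actions.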
Source Journal

Applied Ocean Research
Field: Geosciences / Engineering: Ocean
CiteScore: 8.70
Self-citation rate: 7.00%
Articles per year: 316
Review time: 59 days
Journal description: The aim of Applied Ocean Research is to encourage the submission of papers that advance the state of knowledge in a range of topics relevant to ocean engineering.