{"title":"Reinforcement Learning based Underwater Structural Pole Inspection","authors":"Chee Sheng Tan, R. Mohd-Mokhtar, M. Arshad","doi":"10.1109/USYS56283.2022.10072827","DOIUrl":null,"url":null,"abstract":"The most challenging problem in inspection planning is the structural coverage in an environment with obstacles. This paper presents a coverage path planning framework based on reinforcement learning using an autonomous underwater vehicle (AUV). This approach exploits the knowledge from the model and generates an optimal path to move from the initial position to the nearest area of interest (AOI). Then, it starts to perform a sweep of the exterior boundary of a three-dimensional (3D) structure in the workspace, including concerning the complete coverage of the given AOI and avoiding obstacles. In this model, a non-linear action selection strategy is used to provide a meaningful exploration, contributing to more stability in the learning process. A reward function is designed by taking into consideration multiple objectives to satisfy the sub-goal requirements. The simulation result indicates the effectiveness of the approach in planning the inspection path. The AUV behaves as a boustrophedon motion when covering the AOI and can achieve maximum cumulative reward while reaching the learning goal.","PeriodicalId":434350,"journal":{"name":"2022 IEEE 9th International Conference on Underwater System Technology: Theory and Applications (USYS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 9th International Conference on Underwater System Technology: Theory and Applications (USYS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/USYS56283.2022.10072827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The most challenging problem in inspection planning is achieving structural coverage in an environment with obstacles. This paper presents a coverage path planning framework based on reinforcement learning using an autonomous underwater vehicle (AUV). The approach exploits knowledge from the model and generates an optimal path from the initial position to the nearest area of interest (AOI). It then sweeps the exterior boundary of a three-dimensional (3D) structure in the workspace, ensuring complete coverage of the given AOI while avoiding obstacles. In this model, a non-linear action selection strategy is used to provide meaningful exploration, contributing to a more stable learning process. A reward function is designed around multiple objectives to satisfy the sub-goal requirements. The simulation results indicate the effectiveness of the approach in planning the inspection path. The AUV follows a boustrophedon motion when covering the AOI and achieves the maximum cumulative reward while reaching the learning goal.
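The paper does not publish its implementation, so the following is only a minimal illustrative sketch of the two ingredients named in the abstract: a non-linear (here, exponentially decaying) exploration schedule for action selection, and a multi-objective reward combining coverage progress, obstacle avoidance, and goal completion. All function names, schedules, and weights below are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch only: schedule shape, state/action handling, and reward
# weights are illustrative assumptions, not the authors' actual design.

def nonlinear_epsilon(episode, eps_min=0.05, eps_max=1.0, decay=0.01):
    """One common non-linear exploration schedule: exponential decay."""
    return eps_min + (eps_max - eps_min) * np.exp(-decay * episode)

def select_action(q_values, episode, rng=np.random.default_rng()):
    """Epsilon-greedy selection driven by the non-linear schedule above."""
    if rng.random() < nonlinear_epsilon(episode):
        return int(rng.integers(len(q_values)))   # explore
    return int(np.argmax(q_values))               # exploit

def reward(covered_new_cell, hit_obstacle, reached_goal,
           w_cover=1.0, w_obstacle=5.0, w_goal=10.0, step_cost=0.1):
    """Multi-objective reward: coverage progress, collision penalty, goal bonus.
    The weights are placeholders chosen for illustration."""
    r = -step_cost                  # small per-step cost encourages short paths
    if covered_new_cell:
        r += w_cover                # reward sweeping a previously uncovered AOI cell
    if hit_obstacle:
        r -= w_obstacle             # penalize collisions with the structure
    if reached_goal:
        r += w_goal                 # bonus for completing coverage of the AOI
    return r
```

With a coverage grid over the AOI, summing this reward along an episode favors boustrophedon-like sweeps (each lawnmower pass keeps adding new covered cells) while discouraging collisions and wandering, which matches the behavior the abstract reports.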