{"title":"A Quantitative Evaluation of Bathymetry-Based Bayesian Localization Methods for Autonomous Underwater Robots","authors":"Jungseok Hong;Michael Fulton;Kevin Orpen;Kimberly Barthelemy;Keara Berlin;Junaed Sattar","doi":"10.1109/JOE.2025.3535598","DOIUrl":"https://doi.org/10.1109/JOE.2025.3535598","url":null,"abstract":"This article presents an evaluation of four probabilistic algorithms for bathymetry-based localization of autonomous underwater vehicles (AUVs). The algorithms fuse a priori bathymetry information with depth and range measurements to localize an AUV underwater using four different Bayes filters [extended Kalman filter, unscented Kalman filter, particle filter, and marginalized PF (MPF)]. We develop the algorithms using the robot operating system (ROS), build a realistic simulation platform using ROS Gazebo incorporating real-world bathymetry, and evaluate the performance of these four Bayesian bathymetry-based AUV localization approaches on real-world lake data. The simulation allows the evaluation of algorithms with accurate knowledge of the robot's true location, which is otherwise infeasible to obtain underwater in the field. By relying on the data from a depth sensor and echo sounder, the localization algorithms overcome challenges faced by visual landmark-based localization. Our results show the efficacy of each algorithm under a variety of conditions, with the MPF being the most accurate in general.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"985-1000"},"PeriodicalIF":3.8,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Using Fuzzy Grey Cognitive Maps in Manned and Autonomous Collision Avoidance at Sea","authors":"Mateusz Gil;Katarzyna Poczęta;Krzysztof Wróbel;Zaili Yang;Pengfei Chen","doi":"10.1109/JOE.2024.3516095","DOIUrl":"https://doi.org/10.1109/JOE.2024.3516095","url":null,"abstract":"With Maritime Autonomous Surface Ships (MASS) slowly but steadily nearing full-scale implementation, the question of their safety persists. Regardless of being a disruptive technology, they will likely be subject to the same factors shaping their safety performance as manned ships nowadays are. Yet, the impact of these factors may be different in each case. The current study presents an application of Fuzzy Grey Cognitive Maps (FGCMs) to the comparative evaluation of factors affecting collision avoidance at sea. To this end, subject matter experts have been elicited, and the data obtained from them have been analyzed, concerning how changes in the intensity of given factors would affect safety performance. The obtained results showed that with the use of FGCM, it was possible to model the relative impact of selected factors both on a specific phase of the maritime collision avoidance process as well as on its entirety. The conducted analysis shows noticeable variability of the influence of some factors, depending on the timing of their activation during the process (time dependence), and using FGCM, it was possible to assess its quantification. Furthermore, the results indicate that greater differences can be found between the factors’ impact on phases of an encounter than between manned and autonomous ships. The outcome of this study may be found interesting for all parties involved in maritime safety modeling as well as working on the forthcoming introduction of autonomous ships.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"1210-1230"},"PeriodicalIF":3.8,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10937359","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Exact-Time Trajectory Tracking Control for Autonomous Surface Vessels","authors":"Susan Basnet;Saurabh Kumar;Shashi Ranjan Kumar","doi":"10.1109/JOE.2025.3529062","DOIUrl":"https://doi.org/10.1109/JOE.2025.3529062","url":null,"abstract":"In this article, we address the trajectory tracking control problem of an autonomous surface vessel with limited information about its system dynamics in the presence of bounded external disturbances. We propose nonlinear robust control strategies that guarantee the surface vessel converges to its desired path precisely at an exact time, regardless of its initial engagement geometry with respect to the path, provided it is within a feasible region respecting the physical constraints of the vehicle. Furthermore, the proposed strategy offers an appealing feature of allowing the selection of the convergence time before the start of the engagement. This provides the control designer with an additional degree of freedom to tailor the convergence time a priori according to specific mission requirements. We first provide a design using the knowledge of the upper bound of the disturbances. Later, we extend the design for unknown disturbances. Finally, numerical simulations elucidate the merits of the proposed strategy.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"1184-1195"},"PeriodicalIF":3.8,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Enhancement of an Active Sonar Classifier Using Mode-Connectivity-Based Fine-Tuning Under Data Set Shifts","authors":"Geunhwan Kim;Youngmin Choo","doi":"10.1109/JOE.2025.3558812","DOIUrl":"https://doi.org/10.1109/JOE.2025.3558812","url":null,"abstract":"In supervised-learning-based active sonar classification overcoming data set shifts through standard fine-tuning is challenging due to the limited size and diversity of active sonar data sets. To address this challenge, we propose a robust fine-tuning method using mode connectivity (RoFT-MC), which mitigates two key problems in standard fine-tuning: catastrophic forgetting and negative transfer. RoFT-MC constructs a mode connectivity curve between two independently pretrained models. For adaptation, the curve parameters are optimized using in situ test data rather than training data. RoFT-MC effectively adapts to the shifted test data set while maintaining its performance on the training data set by ensuring that the fine-tuned weights remain on the curve. In addition, we utilize a feasible fine-tuning data set composed of test clutter samples combined with training target samples instead of unavailable test target samples to avoid biased predictions. In the efficacy examination standard fine-tuning failed to adapt to the shifted test data set, whereas RoFT-MC demonstrated a significant performance improvement. Specifically, RoFT-MC increased the probability of detection from 0.2710 to 0.6438 at a false alarm rate of 0.1, while maintaining comparable performance on the training data set.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 3","pages":"2327-2344"},"PeriodicalIF":3.8,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144646659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiangle Sonar Imaging for 3-D Reconstruction of Underwater Objects in Shadowless Environments","authors":"Zhijie Tang;Yang Li;Chi Wang","doi":"10.1109/JOE.2025.3535563","DOIUrl":"https://doi.org/10.1109/JOE.2025.3535563","url":null,"abstract":"In the realm of underwater detection technologies, reconstructing the three-dimensional structure of underwater objects is crucial for applications such as underwater target tracking, target locking, and navigational guidance. As a primary tool for underwater detection, acoustical imaging faces significant challenges in recovering the three-dimensional structure of objects from two-dimensional images. Current 3-D reconstruction methods mainly focus on reconstructing objects at the riverbed, overlooking the reconstruction of objects in the water in the absence of shadows. This study introduces a multiangle shape and height recovery method for such specific situations. By fixing the sonar detection angle and utilizing ViewPoint software to measure the contours of objects at different depths, a superimposition technique for two-dimensional sonar images was developed to achieve three-dimensional reconstruction of shadowless sonar image data. The proposed method is specifically designed for scenarios with diffuse echoes, where the sound waves scatter from rough surfaces rather than reflect specularly from smooth surfaces. This limitation ensures the method's applicability to objects lacking strong mirror-like reflections. This technique has been validated on three different categories of targets, with the reconstructed 3-D models accurately compared to the actual size and shape of the targets, demonstrating the method's effectiveness and providing a theoretical and methodological foundation for the 3-D reconstruction of underwater sonar targets.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"1344-1355"},"PeriodicalIF":3.8,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Oceanic 3-D Thermohaline Field Reconstruction With Multidimensional Features Using SABNN","authors":"Juan Li;Yajie Bai;Xuerong Cui;Lei Li;Bin Jiang;Shibao Li;Jungang Yang","doi":"10.1109/JOE.2025.3535591","DOIUrl":"https://doi.org/10.1109/JOE.2025.3535591","url":null,"abstract":"Aiming at the problems of missing data and outliers in ocean observations and incomplete characterization of thermohaline related features, a 3-D thermohaline reconstruction model of the ocean based on multisource data are proposed. Multisource data from remote sensing and Current and Pressure recording Inverse Echo Sounders were used to analyze the projection relationship between 12-D features, such as sea surface temperature, bidirectional propagation time, and seafloor current velocity, and the distribution of ocean temperature and salinity at different depths (10–1000 m). A Bayesian optimization algorithmic framework is used to evaluate and gradually remove uncertainty from currently known data during the iterative process by extracting network parameters from the approximate probability distribution. More informed decision making improves the stability of the iterative process and reconstruction. In addition, a self-attention mechanism is introduced to dynamically focus on the dependencies between features of different dimensions by calculating the correlation matrix between features at arbitrary locations, enabling the model to more comprehensively characterize the thermohaline distribution and its changes. A Self-attentive Bayesian neural network (SABNN) model is established through empirical regression. The reconstructed model is validated using observational data from the Gulf of Mexico, and the experimental results show that the SABNN model has a significant improvement in temperature and salinity reconstruction accuracy compared with other network models or methods, with the RMSE and <inline-formula><tex-math>$R^{2}$</tex-math></inline-formula> improved by more than 29.68%, 21.14% and 31.01%, 37.33%, respectively.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"1273-1289"},"PeriodicalIF":3.8,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AO-UOD: A Novel Paradigm for Underwater Object Detection Using Acousto–Optic Fusion","authors":"Fengxue Yu;Fengqi Xiao;Congcong Li;En Cheng;Fei Yuan","doi":"10.1109/JOE.2025.3529121","DOIUrl":"https://doi.org/10.1109/JOE.2025.3529121","url":null,"abstract":"Autonomous underwater vehicles can carry multiple sensors, such as optical cameras and sonars, providing a common platform for underwater multimodal object detection. High-resolution optical images contain color information but are not applicable to turbid water environments. In contrast, acoustical waves are highly penetrating and travel long distances, making them suitable for low-light, turbid underwater environments, but sonar imaging has low resolution. The combination of the two can play to their respective advantages. This article presents a novel paradigm for underwater object detection using acousto–optic fusion (AO-UOD). Given that there is no publicly available data set, this article first constructs a paired data set for fusing optical and sonar images for underwater object detection. Paired sonar images and optical images were acquired by aligning the simulated plane of the ocean bottom. Based on this, a dual-stream interactive object detection network is designed. The network utilizes the structures of the fusion backbone, dual neck, and dual head to establish cross-modal information interaction between acoustical and optical. The attention interactive twin-branch fusion module is used to realize the fusion between features. Experimental results on the data collected show that AO-UOD can effectively fuse optical and sonar images to achieve robust detection performance. The multimodal method can utilize more information and possesses greater advantages over the unimodal method. This research not only provides a solid theoretical foundation for future multimodal object detection in marine environments but also points out the direction of technology development in practical applications.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"919-940"},"PeriodicalIF":3.8,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improved YOLOv8-Based Shallow Sea Creatures Object Detection Method","authors":"Yan Liu;Yue Zhao;Bin Yu;Changsheng Zhu;Guanying Huo;Qingwu Li","doi":"10.1109/JOE.2025.3538954","DOIUrl":"https://doi.org/10.1109/JOE.2025.3538954","url":null,"abstract":"With the development and utilization of marine resources, object detection in shallow sea environments becomes crucial. In real underwater environments, targets are often affected by motion blur or appear clustered, increasing detection difficulty. To address this problem, we propose an improved YOLOv8-based shallow sea creatures object detection method. We integrate receptive-field coordinate attention (RFCA) into the cross-stage partial bottleneck with the two convolutions (C2f) module of YOLOv8, creating the RFCA-enhanced C2f (C2f_RFCA). This enhancement improves feature extraction and fusion by leveraging multiscale receptive fields and refined feature fusion strategies, enabling better detection of blurred and occluded objects. The C2f_RFCA module captures both local and global features, enhancing detection accuracy in complex underwater scenarios. We additionally devised an improved dynamic head by substituting the deformable ConvNets version two (DCNv2) with DCNv3, forming dynamic head with DCNv3. This upgrade increases the flexibility of feature mapping and improves accuracy in detecting densely clustered objects by allowing adaptive receptive fields and enhancing boundary delineation. To evaluate the algorithm performance, we trained it on real-world underwater object detection data sets and conducted generalization experiments on detecting underwater objects, the underwater robot professional competition 2020 and underwater target detection and classification 2020 data sets. Experimental results show that, compared with YOLOv8n, our method increases mAP@0.5 by 1.9%, 1.7%, 4.3%, and 3.3%, and mAP@0.5:0.95 by 2.9%, 2.2%, 3.8%, and 5.0% in the four data sets. The proposed method significantly improves object detection accuracy for organisms in complex marine environments.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"817-834"},"PeriodicalIF":3.8,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SCN: A Novel Underwater Images Enhancement Method Based on Single Channel Network Model","authors":"Fuheng Zhou;Siqing Zhang;Yulong Huang;Pengsen Zhu;Yonggang Zhang","doi":"10.1109/JOE.2024.3474924","DOIUrl":"https://doi.org/10.1109/JOE.2024.3474924","url":null,"abstract":"Light is absorbed, reflected, and refracted in an underwater environment due to the interaction between water and light. The red and blue channels in an image are attenuated due to these interactions. The red, green, and blue channels are typically employed as inputs for deep learning models, and the color casts, which result from different attenuation rates of the three channels, may affect the model's generalization performance. Besides, the color casts existing in the reference images will impact the deep-learning models. To address these challenges, a single channel network (SCN) model is introduced, which exclusively employs the green channel as its input, and is unaffected by the attenuations in the red and blue channels. An innovative feature processing module is presented, in which the characteristics of transformers and convolutional layers are fused to capture nonlinear relationships among the red, green, and blue channels. The public EUVP and LSUI data set experiments show that the proposed SCN model achieves competitive results with the existing best three channel models for the case of slight signal attenuation, and outperforms the existing state of arts three-channel models for the case of strong signal attenuation. Furthermore, the proposed model is trained on the self-built noncolor biased underwater image data set and is also tested on the public UCCS data set with three different types of color casts, whose experimental results exhibit balanced color distribution.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 2","pages":"758-775"},"PeriodicalIF":3.8,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Laser Doppler Velocimetry for 3-D Seawater Velocity Measurement Using a Single Wavelength","authors":"Lili Jiang;Xianglong Hao;Xinyu Zhang;Ran Song;Zhijun Zhang;Bingbing Li;Guangbing Yang;Xuejun Xiong;Juan Su;Chi Wu","doi":"10.1109/JOE.2025.3553941","DOIUrl":"https://doi.org/10.1109/JOE.2025.3553941","url":null,"abstract":"In this article, we have experimentally demonstrated a laser Doppler velocimetry (3D-LDV) system capable of measuring 3-D flow velocities, employing a single emission wavelength and four photodetectors for capturing light scattered by particles in seawater. The optical measurement volume of the system is cylindrical and possesses dimensions that are significantly smaller than those of traditional acoustic Doppler systems—1.02 mm in diameter and 15.40 mm in length. This compact size renders the system particularly advantageous for applications demanding high spatial resolution, such as the observation of fine-scale turbulence. The performance of the 3D-LDV system was evaluated using a precision-controlled towing system in static seawater. It exhibited a measurement velocity range of 0.02–3.78 m/s, with a maximum relative error of 3.75%, a relative standard deviation of 1.49%, and an average directional angle deviation of 0.45° for angle changes within ±10°.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 3","pages":"2200-2208"},"PeriodicalIF":3.8,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144646511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}