{"title":"Multiheterogeneous AUV Swarm Technology Exemplified by the MAUS Project: Cooperation, Mission Planning and Hybrid Communication","authors":"Sabah Badri-Hoeher;Thomas Wilts;Lukas Schaefer;Jonni Westphalen;Julian Winkler;Cedric Isokeit;Andrej Harlakin;Maurice Hott;Julius Maximilian Placzek;Stefan Marx;Martin Volz;Erik Maehle;Peter Adam Hoeher","doi":"10.1109/JOE.2024.3451241","DOIUrl":"https://doi.org/10.1109/JOE.2024.3451241","url":null,"abstract":"The mobile autonomous underwater system (MAUS) aims to create next-generation vehicles, focusing on improved intelligence, mission operations, and application scenarios. Accordingly, two types of autonomous underwater vehicles (AUVs) have been developed to operate and collaborate in various applications. The first AUV, named “Hansel,” has hovering capabilities and is tailored to tasks such as object inspection and detection. The second AUV, called “Gretel,” has going capabilities and is suitable for tasks such as seafloor mapping. The two AUV types are equipped with different sensors, allowing them to perform distinct tasks simultaneously as a team. The going AUV has a robust navigation system that includes an inertial navigation system, an ultra-short baseline unit, and a Doppler velocity log for dead reckoning. In contrast, the hovering AUV only has a low-cost micro-electro-mechanical system. Gretel's navigation unit improves Hansel's navigation, along with data transfer between the vehicles. The AUVs rely on a hybrid communication system that integrates acoustic, inductive, and optical links to combine the strengths of each technology. To accomplish their individual goals, the AUVs participate in joint mission planning, utilizing a variety of sensors and tasks that are specific to each AUV. This approach is commonly referred to as multiheterogeneous AUV swarm technology. The multiheterogeneous concept developed in the MAUS project was experimentally validated in Kiel Fjord, located in the southwest Baltic Sea, and in La Spezia, Mediterranean Sea.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"228-251"},"PeriodicalIF":3.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10742613","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Vignetting-Correction-Based Underwater Image Enhancement Method for AUV With Artificial Light","authors":"Zetian Mi;Shuaiyong Jiang;Yuanyuan Li;Huibing Wang;Xianping Fu;Zheng Liang;Peixian Zhuang","doi":"10.1109/JOE.2024.3463840","DOIUrl":"https://doi.org/10.1109/JOE.2024.3463840","url":null,"abstract":"Images captured by autonomous underwater vehicles (AUVs) are inherently affected by artificial light, which tends to generate a distinctive footprint and biased veiling light on the foreground. Existing underwater image enhancement (UIE) methods do not take this serious problem into account. In practice, their enhanced results suffer a severe performance drop, due to the challenging joint task of enhancing underwater images while correcting the vignetting phenomenon. To solve this issue, we propose a two-stage vignetting-correction driven UIE network (called VCU-Net), which consists of two subnetworks (vignetting-correction-net and restoration-net), to deal with the two joint tasks in a split way. Concretely, we first introduce a novel underwater imaging model that is more capable of describing the imaging process for underwater robot applications. Accordingly, sufficient underwater data with vignetting are constructed to train our VCU-Net. In addition, based on the intensity distribution statistics of the lighting footprint formed by artificial light, a radial gradient constrained loss is designed in the vignetting-correction-net, which facilitates the precise estimation of vignetting. To validate the performance, extensive experiments on both synthetic and real-world images captured with an AUV show the effectiveness of the proposed method, which demonstrates clear superiority over state-of-the-art methods in real underwater scenes with complex illumination.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"213-227"},"PeriodicalIF":3.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
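The record above hinges on removing a radially symmetric artificial-light footprint before enhancement. As a minimal classical baseline (not the paper's VCU-Net, whose architecture is not given in the abstract), the vignetting gain can be estimated by fitting log-intensity as a polynomial in normalized radius and divided out; the centered footprint and polynomial degree are illustrative assumptions:

```python
import numpy as np

def correct_vignetting(img, poly_degree=2):
    """Estimate and divide out a radially symmetric gain field.

    A classical baseline, not VCU-Net: assumes the artificial-light
    footprint is a smooth function of distance from the image center.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    r = r / r.max()                       # normalize radius to [0, 1]

    # Fit log-intensity as a polynomial in radius (least squares).
    coeffs = np.polyfit(r.ravel(), np.log(img.ravel() + 1e-6), poly_degree)
    gain = np.exp(np.polyval(coeffs, r))
    gain /= gain.max()                    # unit gain at the brightest radius

    return img / np.maximum(gain, 1e-3)
```

Applied to a flat scene multiplied by a Gaussian footprint, the correction restores a nearly constant image; real footprints are rarely this clean, which is what motivates a learned correction network.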
{"title":"2024 Index IEEE Journal of Oceanic Engineering Vol. 49","authors":"","doi":"10.1109/JOE.2024.3487337","DOIUrl":"https://doi.org/10.1109/JOE.2024.3487337","url":null,"abstract":"","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"49 4","pages":"1-28"},"PeriodicalIF":3.8,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10738844","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142555135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Underwater Surface Normal Reconstruction via Cross-Grained Photometric Stereo Transformer","authors":"Yakun Ju;Ling Li;Xian Zhong;Yuan Rao;Yanru Liu;Junyu Dong;Alex C. Kot","doi":"10.1109/JOE.2024.3458110","DOIUrl":"https://doi.org/10.1109/JOE.2024.3458110","url":null,"abstract":"Modern ocean research necessitates high-precision 3-D underwater data acquisition. Photometric stereo is a critical technique for recovering high-resolution, dense surface normals of textureless objects, such as the seabed and underwater pipelines. This technique is fundamental for underwater robots engaged in ocean exploration and operational tasks. Traditional underwater photometric stereo methods account for distributed underwater media, such as light scattering. However, the deployment of devices in complex underwater environments (e.g., ocean currents) often results in misalignment and jitter among photometric stereo images. These challenges lead to inaccuracies in matching-based methods, particularly due to the lack of texture and varying illumination conditions. To address these issues, we propose the Cross-Grained Transformer Photometric Stereo (CGT-PS) Network. CGT-PS is designed to directly manage misaligned pixels caused by underwater jitter in an end-to-end manner. The proposed method consists of two main components: the local-grained and global-grained modules. The local-grained module utilizes a Shift operation to adjust pixels within a single-pixel span, effectively mitigating misalignment caused by motion without increasing computational cost. In contrast, the global-grained module performs nonlocal fusion learning, leveraging distant features to enhance the extraction of intricate structural details, cast shadows, and interreflection regions. Ablation studies confirm the efficacy of the proposed modules. Extensive experiments on photometric stereo benchmark data sets and real underwater photometric stereo samples demonstrate that our method achieves superior performance.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"192-203"},"PeriodicalIF":3.8,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
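The abstract above describes a zero-cost "Shift" operation that moves feature-map pixels within a single-pixel span. The sketch below follows the generic shift-convolution idea (channel groups shifted up/down/left/right by one pixel); the five-way channel grouping is an assumption, since the paper's exact layout is not given in the abstract:

```python
import numpy as np

def shift_features(x):
    """Shift channel groups of a (C, H, W) feature map by one pixel.

    Sketch of a single-pixel Shift operation: five equal channel groups
    are shifted up, down, left, right, or kept in place (an assumed
    grouping, not the paper's exact design).
    """
    c, h, w = x.shape
    out = x.copy()
    g = c // 5  # group size: up, down, left, right, identity
    out[0 * g:1 * g] = np.roll(x[0 * g:1 * g], -1, axis=1)  # up
    out[1 * g:2 * g] = np.roll(x[1 * g:2 * g], 1, axis=1)   # down
    out[2 * g:3 * g] = np.roll(x[2 * g:3 * g], -1, axis=2)  # left
    out[3 * g:4 * g] = np.roll(x[3 * g:4 * g], 1, axis=2)   # right
    # remaining channels stay unshifted
    return out
```

A following 1x1 convolution can then mix the shifted channels, giving the effect of a spatial filter without extra multiply-adds.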
{"title":"Call for papers: Special Issue on the IEEE UT2025 Symposium","authors":"","doi":"10.1109/JOE.2024.3470608","DOIUrl":"https://doi.org/10.1109/JOE.2024.3470608","url":null,"abstract":"","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"49 4","pages":"1695-1696"},"PeriodicalIF":3.8,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10719024","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142440852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Supervised Marine Organism Detection From Underwater Images","authors":"Jiahua Li;Wentao Yang;Shishi Qiao;Zhaorui Gu;Bing Zheng;Haiyong Zheng","doi":"10.1109/JOE.2024.3455565","DOIUrl":"https://doi.org/10.1109/JOE.2024.3455565","url":null,"abstract":"In recent years, in light of the significant progress in deep learning on general object detection, research on marine organism detection has become increasingly popular. However, manual annotation of marine organism images usually requires specialized expertise, resulting in a scarcity of labeled data for research purposes. In addition, the complex and dynamic marine environment leads to varying degrees of light absorption and scattering, causing severe degradation issues in the collected images. These factors hinder the acquisition of high-quality representations for subsequent detection objectives. To overcome the reliance on annotated marine data sets and derive high-quality representations from extensive unlabeled and degraded data, we propose a self-supervised marine organism detection (SMOD) framework. To the best of the authors' knowledge, it is the first time that self-supervised learning has been introduced into the task of marine organism object detection. Specifically, in order to improve the quality of learned image representation from degraded data, a set of underwater augmentation strategies to improve the perceptual quality of underwater images is designed. To further address the challenging issue posed by numerous marine objects and diverse backgrounds, an underwater attention module is elaborately devised such that the model prioritizes objects over backgrounds during representation learning. Experimental results on the URPC2021 data set show that our SMOD achieves competitive performance in the marine organism object detection task.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"120-135"},"PeriodicalIF":3.8,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
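The SMOD abstract mentions underwater-specific augmentation strategies without listing them. Two plausible examples, sketched with illustrative coefficients (the actual SMOD augmentation set is not specified in the abstract), are wavelength-dependent color attenuation and blue-green veiling light:

```python
import numpy as np

def attenuate_colors(img, depth=0.5):
    """Simulate wavelength-dependent absorption: red decays fastest.

    Coefficients are illustrative only, not taken from the paper.
    """
    beta = np.array([1.2, 0.4, 0.2])  # per-channel (R, G, B) attenuation
    return img * np.exp(-beta * depth)

def add_veiling_light(img, backscatter=0.3, veil=np.array([0.1, 0.4, 0.5])):
    """Blend in a blue-green veiling light, as in a simple scattering model."""
    return (1.0 - backscatter) * img + backscatter * veil
```

In a self-supervised pipeline these would be applied with random parameters to generate the degraded views whose representations the model learns to align.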
{"title":"A Robust Sidescan Sonar Bottom-Tracking Method Based on an Adaptive Threshold","authors":"Fanlin Yang;Haiyang Xu;Xianhai Bu;Chengkai Feng;Mingyi Gan","doi":"10.1109/JOE.2024.3455432","DOIUrl":"https://doi.org/10.1109/JOE.2024.3455432","url":null,"abstract":"Bottom tracking is an essential step in sidescan sonar image processing, which plays a crucial role in geometric distortion correction, geocoding, and image stitching. However, it is difficult to achieve accurate and automatic bottom tracking due to the influence of reflected surface echoes or suspensions in water. Therefore, this article proposes a robust and automatic bottom-tracking method considering multiple influencing factors. First, the bottom-tracking range is delineated by determining whether there are surface echoes. Then, the intensity difference between adjacent sampling intervals of the sonar image is calculated and bottom tracking is performed by an adaptive threshold. Finally, the bottom-tracking results are smoothed by combining them with the robust linear regression algorithm. Experimental results show that the detection accuracy of the proposed method is above 95%, which is higher than the results of the conventional threshold method (79.3%), Laplacian of Gaussian (LoG) operator (77.0%), and Canny operator (87.0%). The proposed method can adaptively adjust the threshold parameters and has a better bottom-tracking result with less computational complexity.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"370-379"},"PeriodicalIF":3.8,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
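The pipeline in the abstract above (adjacent-interval intensity differences, adaptive threshold, regression smoothing) can be sketched in a few lines. This is a simplified stand-in, not the authors' implementation: the per-ping mean + k*std threshold and the ordinary least-squares line (in place of the paper's robust linear regression) are illustrative choices:

```python
import numpy as np

def track_bottom(pings, k=3.0):
    """Per-ping first-bottom-return detection with an adaptive threshold.

    pings: 2-D array, one row of backscatter intensities per ping.
    Thresholds the difference between adjacent sampling intervals at
    mean + k*std of each ping's differences, then smooths the picks
    with a fitted line (a stand-in for robust linear regression).
    """
    picks = []
    for ping in pings:
        diff = np.abs(np.diff(ping))
        thresh = diff.mean() + k * diff.std()   # adaptive per-ping threshold
        above = np.flatnonzero(diff > thresh)
        picks.append(above[0] + 1 if above.size else len(ping) - 1)
    picks = np.asarray(picks, dtype=float)

    # Smooth across pings so isolated outlier picks are pulled to the trend.
    t = np.arange(len(picks))
    slope, intercept = np.polyfit(t, picks, 1)
    return np.rint(slope * t + intercept).astype(int)
```

On synthetic pings with a sharp seafloor return the picks land on the first strong intensity jump; real data would first need the surface-echo gating step the abstract describes.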
{"title":"Cross-Track Seabed Imaging and Buried Object Detection With a Multibeam Sonar","authors":"Charles W. Holland;Samuel Pinson;Daniel L. Orange;Cody R. Henderson","doi":"10.1109/JOE.2024.3452134","DOIUrl":"https://doi.org/10.1109/JOE.2024.3452134","url":null,"abstract":"What lies underneath the ocean floor is of interest to a wide variety of disciplines. The focus here is imaging the upper tens of meters of material beneath the seafloor. The sub-bottom profiler is a valuable tool to that end, producing a 2-D image in depth below the seafloor and along the track of the ship. Geoscientists and engineers are frequently interested in not only the subseabed beneath the ship but also either side or cross-track of the ship. We show an approach to cross-track imaging using a low-frequency multibeam sub-bottom profiler. The main result is the detection of a buried 0.2-m diameter pipeline at a range of nearly 1-km cross-track from the ship in 230-m water depth. This is a swath width coverage of eight times the water depth or nominally an angular range of ±76° from nadir. These results have implications not only for buried object detection but also for other potential applications including exploring seabed spatial variability.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"394-402"},"PeriodicalIF":3.8,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716599","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142975890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
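The figures quoted in the abstract above are mutually consistent: a beam steered ±76° from nadir in 230-m water reaches depth × tan(76°) ≈ 0.92 km to each side, i.e., a full swath of about eight water depths. A quick flat-bottom geometry check:

```python
import math

depth = 230.0                       # water depth in meters
theta = math.radians(76.0)          # beam angle from nadir

# One-sided cross-track reach for a flat seafloor.
cross_track = depth * math.tan(theta)

# Full swath width expressed in water depths.
swath_ratio = 2.0 * cross_track / depth
```

This gives cross_track ≈ 922 m ("nearly 1 km") and swath_ratio ≈ 8.0, matching the abstract's stated coverage.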
{"title":"Temporal Temperature Profile Prediction Using Graph Convolutional Networks and Inverted Echosounder Measurements","authors":"Yu Zhang;Xuerong Cui;Juan Li;Lei Li;Bin Jiang;Shibao Li;Jianhang Liu","doi":"10.1109/JOE.2024.3429211","DOIUrl":"https://doi.org/10.1109/JOE.2024.3429211","url":null,"abstract":"Ocean temperature prediction is a prominent research topic in current ocean science. The empirical modal method, which is based on the inverted echosounder, is one of the most significant methods for analyzing the physical environment of the deep-sea sound layer. This method effectively inverts the temperature profile of the research area. In this article, we propose the time- and self-attention mechanism graph convolutional neural network (ASeTGCN) that uses the inverted data. Unlike traditional time-series forecasting methods, ASeTGCN utilizes graph convolutional networks to capture the inherent spatial correlation of the research area. It also employs self-attention mechanisms to address the nonuniformity of temperature profiles at varying depths. Lastly, it uses time-attention mechanisms to analyze the correlation of temperature profile sequences sampled at daily, weekly, and monthly frequencies. We conducted multiple comparative experiments and related ablation experiments on the proposed model, and our results indicate that the model can effectively extend the time series of temperature profiles in the research area with a root-mean-square error of only 0.19.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"31-44"},"PeriodicalIF":3.8,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
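The graph-convolutional backbone named in the abstract above propagates features over the spatial graph of measurement sites. A single generic GCN layer (symmetric normalization with self-loops) can be sketched as follows; this is the standard propagation rule, not the full ASeTGCN, whose self- and time-attention components are omitted:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: relu(D^-1/2 (A + I) D^-1/2 X W).

    A: (N, N) adjacency matrix of measurement sites.
    X: (N, F) node features, e.g., temperature readings per site.
    W: (F, F') learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Each output row mixes a node's own features with its neighbors', which is how spatial correlation across the research area enters the prediction.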
{"title":"Target Detection Using Underwater Acoustic Communication Links","authors":"Lu Shen;Yuriy Zakharov;Benjamin Henson;Nils Morozs;Benoît Parrein;Paul D. Mitchell","doi":"10.1109/JOE.2024.3455414","DOIUrl":"https://doi.org/10.1109/JOE.2024.3455414","url":null,"abstract":"Underwater monitoring and surveillance systems are essential for underwater target detection, localization, and classification. The aim of this work is to investigate the possibility of target detection by using data transmission between communication nodes in an underwater acoustic (UWA) network, i.e., reusing acoustic communication signals for target detection. A new target detection method based on estimation of the time-varying channel impulse response between the communication transmitter(s) and receiver(s) is proposed and investigated. This is based on a lake experiment and numerical experiments using a simulator developed for modeling the time-varying UWA channel in the presence of a moving target. The proposed detection method provides a clear indication of a target crossing the communication link. A good similarity between results obtained in the numerical and lake experiments is observed.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"45-60"},"PeriodicalIF":3.8,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
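The detection idea in the record above, estimating the time-varying channel impulse response (CIR) between communication nodes and flagging its disturbance, can be sketched with a least-squares CIR estimate and a simple change detector. The relative-norm threshold is a stand-in of our own, not the paper's detection statistic:

```python
import numpy as np

def estimate_cir(tx, rx, n_taps):
    """Least-squares channel impulse response estimate from a known probe.

    Builds the full convolution matrix of the transmitted signal, so
    rx must have length len(tx) + n_taps - 1.
    """
    T = np.column_stack([np.roll(np.pad(tx, (0, n_taps - 1)), k)
                         for k in range(n_taps)])
    h, *_ = np.linalg.lstsq(T, rx, rcond=None)
    return h

def target_crossing(h_baseline, h_now, threshold=0.2):
    """Flag a target when the CIR deviates from its baseline.

    A simple change-detection stand-in: a target crossing the link
    perturbs the multipath structure, shifting the estimated taps.
    """
    return np.linalg.norm(h_now - h_baseline) / np.linalg.norm(h_baseline) > threshold
```

With a noiseless synthetic channel the estimate recovers the true taps exactly; perturbing one tap (as a crossing target would) trips the detector while an unchanged channel does not.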