IEEE Journal of Oceanic Engineering: Latest Articles

WaterLUT: A Lightweight and Generalizable Framework for Real-Time Underwater Image Enhancement
Authors: Xiuna Zeng; Zhenqi Fu; Peixian Zhuang; Xiaotong Tu; Yue Huang; Xinghao Ding
IEEE Journal of Oceanic Engineering, vol. 51, no. 2, pp. 1592-1606. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2026-04-01 (Epub: 2026-02-20). DOI: 10.1109/JOE.2026.3652877
Abstract: Recent years have witnessed notable progress in learning-based underwater image enhancement (UIE). However, many existing methods either fail to consistently deliver satisfactory results or incur high computational and memory costs, limiting their applicability in real-world underwater scenarios. This underscores the need for lightweight models capable of real-time processing while maintaining robust generalization across diverse underwater conditions. To address these challenges, we propose WaterLUT, a lightweight UIE framework that integrates the efficiency of 3-D lookup tables (3-D LUTs) with the adaptability of prompt learning techniques. WaterLUT adopts a dual-branch pipeline: the prompt-guided LUT global enhancement (PLGE) branch leverages contextual features to generate adaptive 3-D LUTs for global color and contrast correction, while the prompt-guided convolutional neural network (CNN) detail enhancement branch employs a lightweight CNN to refine local distortions induced by wavelength-selective absorption and scattering effects. To further improve generalization, WaterLUT introduces a degradation prompt encoder and a corresponding degradation-prompt-based feature adapter to learn degradation-specific prompts and dynamically recalibrate features during enhancement. Extensive experiments on four UIE benchmarks demonstrate that WaterLUT achieves state-of-the-art visual quality with only 0.03M parameters, enabling real-time 1080p enhancement on a single RTX 2080 Ti GPU. Moreover, cross-data-set evaluations confirm its enhanced generalization ability under various underwater degradation types and conditions.
Citations: 0
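As context for the 3-D LUT mechanism the PLGE branch builds on, here is a minimal numpy sketch of how any 3-D LUT is applied to an RGB image via standard trilinear interpolation. The identity table below is a placeholder for illustration, not WaterLUT's learned, prompt-adaptive LUTs:

```python
import numpy as np

def identity_lut(size=17):
    """Identity 3-D LUT: maps each (r, g, b) grid point to itself."""
    axis = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)  # shape (size, size, size, 3)

def apply_lut(image, lut):
    """Apply a 3-D LUT to an RGB image in [0, 1] via trilinear interpolation."""
    size = lut.shape[0]
    scaled = np.clip(image, 0.0, 1.0) * (size - 1)
    lo = np.floor(scaled).astype(int)
    hi = np.minimum(lo + 1, size - 1)
    frac = scaled - lo  # per-channel interpolation weights, shape (..., 3)

    out = np.zeros_like(image, dtype=float)
    # Weighted sum over the 8 corners of the enclosing LUT cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx_r = hi[..., 0] if dr else lo[..., 0]
                idx_g = hi[..., 1] if dg else lo[..., 1]
                idx_b = hi[..., 2] if db else lo[..., 2]
                w = ((frac[..., 0] if dr else 1 - frac[..., 0])
                     * (frac[..., 1] if dg else 1 - frac[..., 1])
                     * (frac[..., 2] if db else 1 - frac[..., 2]))
                out += w[..., None] * lut[idx_r, idx_g, idx_b]
    return out
```

This is why LUT-based enhancement is cheap at inference: the network only has to predict table entries, while the per-pixel work reduces to eight table reads and a weighted sum.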
Two-Stage Spatial-Detail Underwater Image Enhancement Method Based on Semantic Labels
Authors: Bowen Guan; Yan Wang; Yirou Li; Chunyan Wang
IEEE Journal of Oceanic Engineering, vol. 51, no. 2, pp. 1623-1635. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2026-04-01 (Epub: 2026-03-12). DOI: 10.1109/JOE.2026.3659749
Abstract: Underwater image enhancement (UIE) can improve the effectiveness of applications such as underwater object detection. In UIE, synthetic images or pseudoimages are commonly used as enhancement references. However, synthetic images cannot faithfully simulate underwater scenes, and there is an interdomain difference between synthetic images and the underwater environment. Since different enhancement methods produce different effects on different semantic regions of the same image, pseudolabels lose valuable information from other enhancement methods. Moreover, different semantic content places different requirements on the enhancement. If the enhancement reference is provided at the semantic scale instead of the image scale, the advantages of various enhancement methods can be better utilized. To achieve this, this article introduces a semantic-label method, which uses the results of different enhancement methods to label different semantic regions of the same image and build a more comprehensive reference. At the same time, enhanced images are often limited by insufficient texture detail in advanced applications. Existing UIE methods employ an encoder–decoder architecture, which is effective at encoding broad contextual information but not reliable at preserving spatial image details. To simultaneously capture spatial information and retain image details, a two-stage spatial and detail network is proposed, which integrates a U-Net and an original-resolution network to enhance the network's ability to retain details. The experimental results show that the proposed method achieves better qualitative and quantitative evaluation results than existing methods. Ablation studies demonstrate the effectiveness of the various components.
Citations: 0
UIE-DDPM: Underwater Image Enhancement Based on the Integration of Physical Model and Conditional Denoising Diffusion Probabilistic Model
Authors: Baizhong Chen; Chonglei Wang; Chunyu Guo; Yumin Su
IEEE Journal of Oceanic Engineering, vol. 51, no. 2, pp. 1607-1622. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2026-04-01 (Epub: 2026-01-28). DOI: 10.1109/JOE.2025.3635984
Abstract: Underwater image enhancement (UIE) is crucial for underwater perception tasks but is challenged by complex physical degradations, including backscatter, wavelength-dependent attenuation, scattering, turbidity, and color cast. These factors severely reduce visibility and color fidelity in underwater scenes. This article presents a UIE method called UIE-DDPM, which is based on the conditional denoising diffusion probabilistic model and an underwater physical model. UIE-DDPM innovatively enhances underwater images by integrating the Jaffe-McGlamery underwater physical model with the diffusion process and employing optical compensation as a conditional controller to regulate each iteration step with greater specificity. UIE-DDPM consists of the variational autoencoder–optical compensation prediction network (VCP) and the underwater image semantic enhancement network (UISE). The VCP establishes the analytical relationship between optical compensation levels, performs Gaussian sampling and parameter renormalization, and estimates the distribution of optical compensation losses in the latent space. The UISE effectively enhances the essential semantics of degraded images by utilizing reference color-enhanced images as auxiliary information, thereby improving the rendering of details, contours, and contrasts. UIE-DDPM outperforms other baseline methods and generative adversarial network models in UIE, establishing a new state-of-the-art benchmark. In real-world application scenarios, the underwater image quality metric increased by up to 1.336 and the underwater color image quality evaluation by up to 0.158, demonstrating the effectiveness of the proposed method in enhancing underwater image quality under turbid conditions.
Citations: 0
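The Jaffe-McGlamery formation model that UIE-DDPM integrates is commonly simplified to I = J·t + B·(1 − t) with per-channel transmission t = exp(−β·d). A toy numpy sketch of that simplified forward model and its direct closed-form inversion follows; the paper's diffusion-based pipeline is far richer than this inverse, and the parameter values here are illustrative only:

```python
import numpy as np

def transmission(depth, beta):
    """Per-channel transmission t = exp(-beta * d) (Beer-Lambert attenuation)."""
    return np.exp(-depth[..., None] * beta)

def degrade(clean, depth, beta, backscatter):
    """Simplified underwater image formation: I = J*t + B*(1 - t)."""
    t = transmission(depth, beta)
    return clean * t + backscatter * (1.0 - t)

def restore(observed, depth, beta, backscatter, t_min=0.05):
    """Invert the model: J = (I - B*(1 - t)) / max(t, t_min).

    The t_min floor avoids amplifying noise where transmission is tiny.
    """
    t = np.maximum(transmission(depth, beta), t_min)
    return (observed - backscatter * (1.0 - t)) / t
```

In practice β, d, and B are unknown and noisy, which is why learned approaches such as UIE-DDPM condition a generative model on the physics rather than inverting it directly.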
UStyle: Waterbody Style Transfer of Underwater Scenes by Depth-Guided Feature Synthesis
Authors: Md Abu Bakr Siddique; Vaishnav Ramesh; Junliang Liu; Piyush Singh; Md Jahidul Islam
IEEE Journal of Oceanic Engineering, vol. 51, no. 2, pp. 1574-1591. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2026-04-01 (Epub: 2026-01-21). DOI: 10.1109/JOE.2025.3647814
Abstract: The concept of "waterbody style" transfer remains largely unexplored in the underwater imaging literature. Traditional image style transfer (STx) methods emphasize artistic and photorealistic blending that fails to preserve geometry in high-scattering underwater environments. The wavelength-dependent attenuation and depth-dependent backscattering of underwater optics further complicate STx learning from unpaired data. We introduce UStyle, the first data-driven framework for transferring waterbody styles across underwater images without requiring reference images or explicit scene information. We propose a novel depth-aware whitening and coloring transform that incorporates physics-based waterbody synthesis for perceptually consistent stylization while preserving scene structure. We also integrate carefully designed loss functions to maintain color, lightness, structural integrity, frequency-domain features, and high-level content in VGG and CLIP spaces. Comprehensive experimental analyses show that UStyle surpasses state-of-the-art methods that rely solely on reconstruction loss. In addition, we present the UF7D data set, a curated benchmark of high-resolution underwater images across seven waterbody styles.
Citations: 0
Human-in-the-Loop Segmentation of Multispecies Coral Imagery
Authors: Scarlett Raine; Ross Marchant; Brano Kusy; Frederic Maire; Niko Sünderhauf; Tobias Fischer
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 762-779. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-12-22. DOI: 10.1109/JOE.2025.3625691
Abstract: Marine surveys by robotic underwater and surface vehicles produce substantial quantities of coral reef imagery; however, labeling these images is expensive and time-consuming for domain experts. Point label propagation is a technique that uses existing images labeled with sparse points to create augmented ground truth data, which can be used to train a semantic segmentation model. In this work, we show that recent advances in large foundation models facilitate the creation of augmented ground truth masks using only features extracted by the denoised version of the DIstillation of knowledge with NO labels version 2 (DINOv2) foundation model and K-nearest neighbors (KNN), without any pretraining. For images with extremely sparse labels, we use human-in-the-loop principles to enhance annotation efficiency: with five point labels per image, our method outperforms the prior state of the art by 19.7% mean intersection over union (mIoU). When human-in-the-loop labeling is not available, using the denoised DINOv2 features with KNN still improves on the prior state of the art by 5.8% mIoU (five grid points). On the semantic segmentation task, we outperform the prior state of the art by 13.5% mIoU when only five point labels are used for point label propagation. In addition, we perform a comprehensive study into the number and placement of point labels, and make several recommendations for improving the efficiency of labeling images with points.
Citations: 0
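The core propagation step the paper describes — nearest neighbors in a frozen backbone's feature space — can be sketched as follows. The toy features here stand in for the denoised DINOv2 embeddings used in the paper:

```python
import numpy as np

def propagate_point_labels(features, point_coords, point_labels, k=3):
    """Propagate sparse point labels to every pixel by k-NN in feature space.

    features:     (H, W, D) per-pixel feature map (e.g., from a frozen backbone)
    point_coords: (P, 2) row/col positions of the labeled points
    point_labels: (P,) integer class ids
    Returns a dense (H, W) label mask.
    """
    h, w, d = features.shape
    labeled = features[point_coords[:, 0], point_coords[:, 1]]        # (P, D)
    flat = features.reshape(-1, d)                                    # (H*W, D)
    # Pairwise squared Euclidean distances to the labeled points.
    dists = ((flat[:, None, :] - labeled[None, :, :]) ** 2).sum(-1)   # (H*W, P)
    k = min(k, len(point_labels))
    nn = np.argsort(dists, axis=1)[:, :k]                             # (H*W, k)
    votes = point_labels[nn]                                          # (H*W, k)
    # Majority vote among the k nearest labeled points.
    mask = np.array([np.bincount(v).argmax() for v in votes])
    return mask.reshape(h, w)
```

With good features, two labeled points can already carve an image into coherent regions, which is why label efficiency hinges on feature quality rather than on the propagation rule itself.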
Cross-Scale Attention Feature Pyramid Network for Challenged Underwater Object Detection
Authors: Miao Yang; Jinyang Zhong; Hansen Zhang; Can Pan; Xinmiao Gao; Chenglong Gong
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 826-835. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-12-22. DOI: 10.1109/JOE.2024.3450532
Abstract: Underwater object detection (UOD) is more difficult than object detection in air due to noise caused by irrelevant objects and textures, and due to scale variation. These difficulties pose a higher challenge to the feature extraction capability of detectors. A feature pyramid network (FPN) enhances the scale detection capability of detectors, while attention mechanisms effectively suppress irrelevant features. We present a cross-scale attention feature pyramid network (CSAFPN) for UOD. A feature fusion guided (FFG) module is incorporated in the CSAFPN, which constructs cross-scale context information and simultaneously guides the enhancement of all feature maps. Compared to existing FPN-like architectures, CSAFPN excels not only at capturing cross-scale long-range dependencies but also at acquiring compact multi-scale feature maps that specifically emphasize target regions. Extensive experiments on the Brackish2019 data set show that CSAFPN achieves consistent improvements across various backbones and detectors. Moreover, FFG can be seamlessly integrated into any FPN-like architecture, offering a cost-effective improvement in UOD: a 1.4% average precision (AP) increase for FPN, a 1.3% AP increase for PANet, and a 1.4% AP increase for neural architecture search FPN (NAS-FPN).
Citations: 0
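For readers unfamiliar with the FPN-style fusion that CSAFPN extends, a minimal top-down pathway in numpy is shown below. The 1x1 channel projections and the paper's FFG module are omitted, and all levels are assumed to share a channel count:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def top_down_fuse(pyramid):
    """FPN-style top-down pathway: each coarser level is upsampled and
    added into the next finer level.

    pyramid[0] is the finest level; each subsequent level halves the
    spatial size of the previous one. Returns fused maps, finest first,
    with the same shapes as the input.
    """
    fused = [pyramid[-1]]  # start from the coarsest level
    for feat in reversed(pyramid[:-1]):
        fused.append(feat + upsample2x(fused[-1]))
    return list(reversed(fused))
```

CSAFPN's contribution is precisely what this sketch lacks: the FFG module replaces the plain addition with cross-scale attention so that all levels are enhanced jointly rather than pairwise.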
NA-UICDE: A Novel Adaptive Algorithm for Underwater Image Color Correction and Detail Enhancement
Authors: Yuyun Chen; Wenguang He; Gangqiang Xiong; Junwu Li; Yaomin Wang
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 794-806. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-12-09. DOI: 10.1109/JOE.2025.3617906
Abstract: Underwater images often suffer from color distortion and detail loss due to light absorption and scattering, which degrades visual quality and limits practical applications. To address these issues, a novel adaptive algorithm for underwater image color correction and detail enhancement is proposed. The algorithm first applies threshold stretching to adjust the grayscale range, enhancing contrast while mitigating the risk of localized overcompensation. Based on the color distribution, images are categorized into bluish and greenish tones, providing the foundation for the adaptive color compensation method (ACCM). The ACCM is designed to compensate each color channel separately, using the green channel as a reference to restore the most degraded channels while maintaining overall color balance. The compensation process is further constrained by the minimum color loss criterion to ensure consistent color fidelity. Furthermore, an edge detail enhancement method is formulated to recover fine details by amplifying intensity differences between the original image and its smoothed version. Extensive experiments on multiple underwater image data sets demonstrate that the proposed algorithm consistently outperforms state-of-the-art methods, achieving average improvements of 0.0021, 0.0646, 0.3677, and 0.0800 in the underwater color image quality evaluation, underwater image quality metric, fog aware density evaluator, and colorfulness contrast fog density index metrics, respectively, underscoring its effectiveness and robustness across diverse underwater environments.
Citations: 0
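Green-referenced channel compensation of the kind ACCM builds on can be illustrated with a classic formula (in the style of Ancuti et al.). This sketch deliberately omits the paper's threshold stretching, bluish/greenish classification, and minimum color loss constraint:

```python
import numpy as np

def compensate_channel(channel, green, alpha=1.0):
    """Pull a degraded channel toward the green-channel mean.

    Classic green-referenced compensation: the correction is proportional
    to the mean gap, weighted by (1 - channel) so bright pixels are not
    overboosted, and modulated by the green intensity. Values in [0, 1].
    """
    gap = green.mean() - channel.mean()
    return np.clip(channel + alpha * gap * (1.0 - channel) * green, 0.0, 1.0)

def adaptive_compensation(image, alpha=1.0):
    """Use green as the reference; boost whichever of red/blue lags its mean."""
    out = image.astype(float).copy()
    g = out[..., 1]
    for c in (0, 2):  # red and blue channels
        if out[..., c].mean() < g.mean():
            out[..., c] = compensate_channel(out[..., c], g, alpha)
    return out
```

Green is chosen as the reference because it is attenuated least in most waterbodies, so its statistics survive degradation better than those of red or blue.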
UMono: Physical-Model-Informed Hybrid CNN–Transformer Framework for Underwater Monocular Depth Estimation
Authors: Xupeng Wu; Jian Wang; Jing Wang; Shenghui Rong; Bo He
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 780-793. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-11-10. DOI: 10.1109/JOE.2025.3606045
Abstract: Underwater monocular depth estimation serves as the foundation for tasks such as 3-D reconstruction of underwater scenes. However, due to the absorption and scattering of light in the water medium, the underwater environment undergoes a distinctive imaging process, which makes it challenging to estimate depth accurately from a single image. Existing methods fail to consider the unique characteristics of underwater environments, leading to inadequate estimation results and limited generalization performance. Furthermore, underwater depth estimation requires extracting and fusing both local and global features, which is not fully explored in existing methods. In this article, an end-to-end learning framework for underwater monocular depth estimation called UMono is presented, which incorporates the characteristics of the underwater image formation model into the network architecture and effectively utilizes both local and global features of an underwater image. Specifically, UMono consists of an encoder with a hybrid convolutional neural network (CNN) and Transformer architecture, and a decoder guided by a medium transmission map. First, we develop an underwater deep feature extraction (UDFE) block, which leverages the CNN and Transformer in parallel to achieve comprehensive extraction of both local and global features. These features are effectively integrated via the proposed local–global feature fusion (LGFF) module. By stacking the UDFE block as the basic unit, we construct a hybrid encoder that generates four-stage hierarchical features. Subsequently, the medium transmission map is incorporated into the network as underwater domain knowledge and, together with the encoded hierarchical features, fed into the underwater depth information aggregation (UDIA) module, which aggregates depth information from the physical model and the neural network through a proposed cross-attention mechanism. The aggregated features then serve as the guiding information for each decoding stage, enabling the model to achieve comprehensive scene understanding and precise depth estimation. The final estimated depth map is obtained through consecutive upsampling. Experimental results demonstrate that the proposed method is effective for underwater monocular depth estimation and outperforms existing methods in both quantitative and qualitative analyses.
Citations: 0
2025 Index, IEEE Journal of Oceanic Engineering
IEEE Journal of Oceanic Engineering, vol. 50, no. 4, pp. 1-57. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-11-05. DOI: 10.1109/JOE.2025.3628413 (open access)
Citations: 0
Joint Image Enhancement for Underwater Object Detection in Various Domains
Authors: Junjie Wen; Guidong Yang; Benyun Zhao; Lei Lei; Zhi Gao; Xi Chen; Ben M. Chen
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 807-825. IF 5.3, Q2 (Engineering & Technology).
Pub date: 2025-10-30. DOI: 10.1109/JOE.2025.3604170
Abstract: Underwater environments present significant challenges, such as image degradation and domain discrepancies, that severely impact object detection performance. Traditional approaches often use image enhancement as a preprocessing step, but this adds computational overhead and latency, and can even degrade detection accuracy. To address these issues, we propose a novel underwater object detection framework that jointly trains image enhancement within a multitask architecture. This framework employs a progressive training strategy to iteratively improve detection performance through enhancement, and introduces a domain-adaptation mechanism to align features across domains at both the image and object levels. Experimental results demonstrate that our method achieves state-of-the-art performance across diverse data sets, with real-time detection at 105.93 frames per second and over +15% absolute improvement in mean average precision in unseen environments, underscoring its potential for real-world underwater applications.
Citations: 0