Generative adversarial network based frequency domain enhancement and color compensation underwater image enhancement

Impact Factor: 3.5 · CAS Region 2 (Engineering & Technology) · JCR Q2 (Optics)
Jiaxin Li, Zheping Yan
DOI: 10.1016/j.optlaseng.2025.109102
Journal: Optics and Lasers in Engineering, Volume 193, Article 109102
Published: 2025-05-26 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0143816625002878
Citations: 0

Abstract

In complex underwater environments, the abundance of suspended particles and the varying scattering and absorption characteristics of light in different waters subject underwater images to diverse forms of mixed attenuation, such as color casts, poor contrast, and loss of detail. This greatly limits the operational efficiency of underwater systems. To this end, we propose a new generative adversarial network based underwater image enhancement method with frequency domain enhancement and color compensation, which enhances images simultaneously in the frequency and spatial domains. Specifically, we design a dual-encoder architecture in the generator, comprising a structural encoder and a color compensation encoder. We embed a Multi-scale Dense Feature Aggregation (MDFA) module in the dual encoder so that each encoder extracts rich semantic and contextual information according to its task. In the decoder, we design a Frequency-domain Fourier Enhancement Module (FFEM) and a Complementary-color Prior Color-compensation Module (CPCM). The FFEM performs color correction and detail enhancement in the frequency domain on the features captured by the structural encoder. In the spatial domain, the CPCM uses the color compensation information extracted by the color compensation encoder to adjust the FFEM's enhancement results. Extensive experiments show that the proposed method significantly improves degraded image quality, exhibits superior generalization, and outperforms state-of-the-art methods in both quantitative and qualitative evaluations. Our code is available at https://github.com/LiJiaxin011/FCC-GAN.
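To make the two ideas in the abstract concrete, the sketch below applies their classical, non-learned counterparts with plain NumPy: boosting high-frequency amplitude in the Fourier domain (the operation family the FFEM builds on) and red-channel compensation from the green channel (a standard complementary-color prior for underwater images). This is NOT the paper's FFEM or CPCM, which are learned modules inside a GAN; the function names, gains, and thresholds here are illustrative assumptions.

```python
import numpy as np

def frequency_enhance(channel: np.ndarray, gain: float = 1.2) -> np.ndarray:
    """Boost high-frequency amplitude of one image channel via the 2-D FFT.

    A hand-crafted stand-in for frequency-domain detail enhancement: the
    amplitude spectrum is amplified outside a low-frequency radius while the
    phase is kept intact.
    """
    spectrum = np.fft.fft2(channel)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    h, w = channel.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    # Radial mask built in the centered layout, shifted back to FFT layout.
    hf_mask = np.fft.ifftshift(radius > min(h, w) * 0.05)
    amplitude = np.where(hf_mask, amplitude * gain, amplitude)
    enhanced = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return np.clip(enhanced, 0.0, 1.0)

def compensate_red(img: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Classical red-channel compensation (a complementary-color prior).

    Red light attenuates fastest underwater, so part of the better-preserved
    green channel is transferred into the red channel, weighted so that
    already-bright red pixels are compensated less.
    """
    r, g = img[..., 0], img[..., 1]
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))   # stand-in for an underwater frame in [0, 1]
    img[..., 0] *= 0.4              # simulate strong red-channel attenuation
    img = compensate_red(img)       # spatial-domain color compensation
    img = np.stack([frequency_enhance(img[..., c]) for c in range(3)], axis=-1)
    print(img.shape)
```

In the paper these two steps are not a fixed pipeline: the network learns both operations, and the CPCM's spatial-domain compensation adjusts the FFEM's frequency-domain output rather than preceding it.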
Source journal: Optics and Lasers in Engineering (Engineering & Technology — Optics)
CiteScore: 8.90
Self-citation rate: 8.70%
Articles per year: 384
Review time: 42 days
Journal description: Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions, and new theoretical concepts for experimental methods. Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal includes: optical metrology; optical methods for 3D visualization and virtual engineering; optical techniques for microsystems; imaging, microscopy, and adaptive optics; computational imaging; laser methods in manufacturing; integrated optical and photonic sensors; optics and photonics in life science; hyperspectral and spectroscopic methods; infrared and terahertz techniques.