Authors: Jiaxin Li, Zheping Yan
DOI: 10.1016/j.optlaseng.2025.109102
Journal: Optics and Lasers in Engineering, Volume 193, Article 109102 (Impact Factor 3.5; JCR Q2, Optics)
Publication date: 2025-05-26 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0143816625002878
Code: https://github.com/LiJiaxin011/FCC-GAN
Generative adversarial network based frequency domain enhancement and color compensation underwater image enhancement
In complex underwater environments, the large number of suspended particles and the varying scattering and absorption characteristics of light in different waters subject underwater images to diverse forms of mixed degradation, such as color bias, poor contrast, and loss of detail. This greatly limits the operational efficiency of underwater systems. To this end, we propose a new underwater image enhancement method based on a generative adversarial network with frequency-domain enhancement and color compensation, which enhances images simultaneously in both the frequency and spatial domains. Specifically, we designed a dual-encoder architecture in the generator, consisting of a structural encoder and a color compensation encoder. We embed a Multi-scale Dense Feature Aggregation (MDFA) module in the dual encoder so that each encoder extracts rich semantic and contextual information according to its task requirements. In the decoder, we designed a Frequency-domain Fourier Enhancement Module (FFEM) and a Complementary-color Prior Color-compensation Module (CPCM). The FFEM performs color correction and detail enhancement in the frequency domain on the features captured by the structural encoder. In the spatial domain, the CPCM uses the color compensation information extracted by the color compensation encoder to adjust the enhancement results of the FFEM. Extensive experiments show that the proposed method significantly improves degraded image quality, exhibits superior generalization performance, and outperforms state-of-the-art methods in both quantitative and qualitative evaluations. Our code is available at https://github.com/LiJiaxin011/FCC-GAN.
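The abstract does not give the FFEM's internal design; it operates on learned features inside the decoder. As an illustration of the general idea of frequency-domain detail enhancement via a Fourier transform, here is a minimal, hypothetical NumPy sketch that amplifies the high-frequency band of a single image channel (function name, gain, and radius threshold are assumptions, not the paper's method):

```python
import numpy as np

def fourier_enhance(channel: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Boost high-frequency content of one image channel in [0, 1].

    Hypothetical illustration of frequency-domain enhancement; the
    paper's FFEM works on learned feature maps, not raw pixels.
    """
    # 2D FFT, shifted so the zero-frequency term sits at the center.
    spectrum = np.fft.fftshift(np.fft.fft2(channel))
    h, w = channel.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    # Leave a small low-frequency core untouched; amplify the rest
    # (high frequencies carry edges and fine detail).
    mask = np.where(radius > min(h, w) * 0.1, gain, 1.0)
    enhanced = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return np.clip(enhanced, 0.0, 1.0)
```

In a learned setting such as this paper's, the fixed `mask` would be replaced by network-predicted weights on the real and imaginary spectrum components, but the round-trip structure (FFT, modify, inverse FFT) is the same.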
About the journal:
Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques