U-TransCNN: A U-shape transformer-CNN fusion model for underwater image enhancement
Yao Haiyang, Guo Ruige, Zhao Zhongda, Zang Yuzhang, Zhao Xiaobo, Lei Tao, Wang Haiyan
Displays, Volume 88, Article 103047 (published 2025-03-27). DOI: 10.1016/j.displa.2025.103047
Abstract
Underwater imaging faces significant challenges due to nonuniform optical absorption and scattering, resulting in visual quality issues such as color distortion, reduced contrast, and image blurring. These factors hinder the accurate capture and clear depiction of underwater scenes. To address these complexities, we propose U-TransCNN, a U-shape Transformer-Convolutional Neural Network (CNN) model designed to enhance underwater images by integrating the strengths of CNNs and Transformers. The core of U-TransCNN is the Global-Detail Feature Synchronization Fusion Module, which enhances global color and contrast while preserving intricate texture details, ensuring that both the macroscopic and microscopic aspects of the image are enhanced in unison. We then design the Multiscale Detail Fusion Block to aggregate a richer spectrum of feature information using a variety of convolution kernels. Furthermore, our optimization strategy is augmented with a joint loss function, a dynamic approach that allows the model to assign varying weights to the loss at different pixels depending on their loss magnitude. Six experiments (including full-reference and no-reference evaluations) on three public underwater datasets confirm that U-TransCNN comprehensively surpasses contemporary state-of-the-art deep learning algorithms, demonstrating marked improvements in the visual quality and quantitative metrics of underwater images. Our code is available at https://github.com/GuoRuige/UTransCNN.
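The multi-kernel aggregation idea described above can be made concrete with a short sketch. The following Python (PyTorch) fragment is illustrative only: the class name MultiscaleDetailFusionBlock, the kernel sizes (3, 5, 7), the channel widths, and the residual layout are assumptions made for exposition, not the authors' released implementation (see the linked repository for that).

import torch
import torch.nn as nn

class MultiscaleDetailFusionBlock(nn.Module):
    """Illustrative sketch: aggregate features with several kernel sizes, then fuse them."""

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size; padding k // 2 keeps the spatial resolution unchanged.
        self.branches = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(channels, channels, k, padding=k // 2),
                    nn.ReLU(inplace=True),
                )
                for k in kernel_sizes
            ]
        )
        # A 1x1 convolution fuses the concatenated branch outputs back to the input width.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # residual connection helps preserve fine detail

The dynamically weighted loss can be hedged in the same way. The sketch below simply scales each pixel's L1 error by its detached, normalized magnitude so that poorly reconstructed pixels contribute more; the paper's exact joint loss terms and weighting rule are not reproduced here.

def dynamic_pixel_weighted_l1(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Per-pixel error averaged over color channels: shape (B, 1, H, W).
    per_pixel = (pred - target).abs().mean(dim=1, keepdim=True)
    # Detach the weights so the weighting itself receives no gradient.
    weights = per_pixel.detach()
    weights = weights / (weights.mean(dim=(2, 3), keepdim=True) + 1e-8)
    return (weights * per_pixel).mean()

A block of this kind could sit in the encoder or decoder path of a U-shaped network alongside the Transformer branch the abstract describes; the weighted loss would then replace or complement a plain L1 term during training.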
About the journal:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.