SFP-Net: Scatter-feature-perception network for underwater image enhancement

Impact Factor 3.4 · CAS Tier 2 (Engineering & Technology) · JCR Q1, Computer Science, Hardware & Architecture
Zhengjie Wang, Jiaying Guo, Junchen Zhang, Guokai Zhang, Tianrun Yu, Shangzixin Zhao, Dandan Zhu
{"title":"SFP-Net: Scatter-feature-perception network for underwater image enhancement","authors":"Zhengjie Wang ,&nbsp;Jiaying Guo ,&nbsp;Junchen Zhang ,&nbsp;Guokai Zhang ,&nbsp;Tianrun Yu ,&nbsp;Shangzixin Zhao ,&nbsp;Dandan Zhu","doi":"10.1016/j.displa.2025.103132","DOIUrl":null,"url":null,"abstract":"<div><div>Inspired by atmospheric scattering light models, substantial progresses have been achieved in deep learning-based methods for underwater image enhancement. However, these methods suffer from a deficiency in accurately modeling scattering information, which can incur some quality issues of visual perception. Moreover, insufficient attention to the key scene features leads to enhanced images that lack fine-grained information. To alleviate these challenges, we propose an efficient scatter-feature-perception network(SFP-Net). It consists of two core ideas: firstly, the dark channel map is synergistically combined with the K-value map to precisely perceive scattering light features within the scene. Subsequently, multi-scale cross-space learning is used to capture the inter-dependencies between channels and spatial positions, facilitating the perception of scene feature information. Besides, the adaptive scatter feature loss is formulated on the basis of the atmospheric scattering model, which evaluates the impact of scattered light. Extensive experimental results demonstrate that our model effectively mitigates the influence of underwater environmental factors, circumvents interference caused by image depth of field, and exhibits superior performance in terms of adaptability and reliability. Notably, our model achieves maximum values of 29.76 and 0.91 on the PSNR and SSIM metrics, which indicates superior enhancement effects compared to existing methods. Meanwhile, the UCIQE and UIQM metrics also reached 0.431 and 2.763 respectively, which are more in line with human visual preferences.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103132"},"PeriodicalIF":3.4000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225001696","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Inspired by atmospheric scattering light models, substantial progress has been made in deep learning-based methods for underwater image enhancement. However, these methods fall short in accurately modeling scattering information, which can degrade the perceptual quality of the results. Moreover, insufficient attention to key scene features leads to enhanced images that lack fine-grained detail. To alleviate these challenges, we propose an efficient scatter-feature-perception network (SFP-Net). It is built on two core ideas: first, the dark channel map is combined synergistically with the K-value map to precisely perceive scattered-light features within the scene; second, multi-scale cross-space learning is used to capture the inter-dependencies between channels and spatial positions, facilitating the perception of scene feature information. In addition, an adaptive scatter feature loss is formulated on the basis of the atmospheric scattering model to evaluate the impact of scattered light. Extensive experimental results demonstrate that our model effectively mitigates the influence of underwater environmental factors, circumvents interference caused by image depth of field, and exhibits superior adaptability and reliability. Notably, our model achieves maximum values of 29.76 and 0.91 on the PSNR and SSIM metrics, indicating superior enhancement over existing methods. Meanwhile, the UCIQE and UIQM metrics reach 0.431 and 2.763, respectively, showing closer agreement with human visual preferences.
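
For readers new to the underlying priors, the sketch below illustrates the classical dark channel map and the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) that the abstract builds on. The paper's K-value map, network architecture, and adaptive scatter feature loss are not specified in this listing, so the simple reconstruction term and all function names below are illustrative assumptions, not the authors' method.

```python
# Minimal NumPy/SciPy sketch of two classical ingredients referenced by the abstract:
# the dark channel map and the atmospheric scattering model. The paper's K-value map
# and adaptive scatter feature loss are NOT reproduced; the L1 reconstruction term
# below is a hypothetical stand-in.
import numpy as np
from scipy.ndimage import minimum_filter


def dark_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Dark channel of an H x W x 3 image in [0, 1]: per-pixel minimum over the
    colour channels, followed by a local minimum over a square patch window."""
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch_size)


def scattering_model(J: np.ndarray, t: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Atmospheric scattering model: observed image I from scene radiance J (H x W x 3),
    transmission map t (H x W), and background light A (3,)."""
    return J * t[..., None] + A[None, None, :] * (1.0 - t[..., None])


def scatter_reconstruction_loss(I, J, t, A):
    """Hypothetical scatter-aware loss: mean absolute error between the observed
    image and its scattering-model reconstruction."""
    return np.abs(I - scattering_model(J, t, A)).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I = rng.random((64, 64, 3))          # observed underwater image
    J = rng.random((64, 64, 3))          # candidate enhanced (scene radiance) image
    t = rng.uniform(0.3, 1.0, (64, 64))  # transmission map
    A = np.array([0.1, 0.5, 0.6])        # greenish-blue water background light
    print("dark channel mean:", dark_channel(I).mean())
    print("scatter loss:", scatter_reconstruction_loss(I, J, t, A))
```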
Source journal
Displays (Engineering & Technology — Electrical & Electronic Engineering)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles published: 138
Review time: 92 days
Journal introduction: Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human factors engineers new to the field, will also occasionally be featured.