{"title":"SFP-Net: Scatter-feature-perception network for underwater image enhancement","authors":"Zhengjie Wang , Jiaying Guo , Junchen Zhang , Guokai Zhang , Tianrun Yu , Shangzixin Zhao , Dandan Zhu","doi":"10.1016/j.displa.2025.103132","DOIUrl":null,"url":null,"abstract":"<div><div>Inspired by atmospheric scattering light models, substantial progresses have been achieved in deep learning-based methods for underwater image enhancement. However, these methods suffer from a deficiency in accurately modeling scattering information, which can incur some quality issues of visual perception. Moreover, insufficient attention to the key scene features leads to enhanced images that lack fine-grained information. To alleviate these challenges, we propose an efficient scatter-feature-perception network(SFP-Net). It consists of two core ideas: firstly, the dark channel map is synergistically combined with the K-value map to precisely perceive scattering light features within the scene. Subsequently, multi-scale cross-space learning is used to capture the inter-dependencies between channels and spatial positions, facilitating the perception of scene feature information. Besides, the adaptive scatter feature loss is formulated on the basis of the atmospheric scattering model, which evaluates the impact of scattered light. Extensive experimental results demonstrate that our model effectively mitigates the influence of underwater environmental factors, circumvents interference caused by image depth of field, and exhibits superior performance in terms of adaptability and reliability. Notably, our model achieves maximum values of 29.76 and 0.91 on the PSNR and SSIM metrics, which indicates superior enhancement effects compared to existing methods. Meanwhile, the UCIQE and UIQM metrics also reached 0.431 and 2.763 respectively, which are more in line with human visual preferences.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103132"},"PeriodicalIF":3.4000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225001696","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Inspired by atmospheric scattering models, deep learning-based methods for underwater image enhancement have achieved substantial progress. However, these methods model scattering information inaccurately, which can degrade perceived visual quality. Moreover, insufficient attention to key scene features yields enhanced images that lack fine-grained detail. To alleviate these challenges, we propose an efficient scatter-feature-perception network (SFP-Net) built on two core ideas. First, a dark channel map is synergistically combined with a K-value map to precisely perceive scattered-light features within the scene. Second, multi-scale cross-space learning captures the inter-dependencies between channels and spatial positions, facilitating the perception of scene feature information. In addition, an adaptive scatter feature loss, formulated from the atmospheric scattering model, evaluates the impact of scattered light. Extensive experiments demonstrate that our model effectively mitigates the influence of underwater environmental factors, circumvents interference caused by image depth of field, and exhibits superior adaptability and reliability. Notably, our model achieves maximum values of 29.76 PSNR and 0.91 SSIM, indicating stronger enhancement than existing methods, while its UCIQE and UIQM scores of 0.431 and 2.763 respectively align more closely with human visual preferences.
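The abstract does not include code; as a rough illustration of the scatter-perception input it describes, the sketch below computes a dark channel map following the standard dark channel prior (per-pixel minimum over RGB, then a local minimum filter). The patch size and the stacking with a K-value map are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_map(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Standard dark channel prior: minimum over color channels,
    followed by a local minimum filter over a square patch.

    image: float array of shape (H, W, 3), values in [0, 1].
    """
    per_pixel_min = image.min(axis=2)                       # min over R, G, B
    return minimum_filter(per_pixel_min, size=patch_size)   # min over local patch

# Hypothetical usage: combine with a K-value map (its exact definition is
# specific to the paper) to form a scatter-perception input for the network.
# scatter_input = np.stack([dark_channel_map(img), k_value_map], axis=0)
```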
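The adaptive scatter feature loss is stated to derive from the atmospheric scattering model, whose standard form is I(x) = J(x) t(x) + A (1 - t(x)), where I is the observed image, J the clear scene radiance, t the transmission map, and A the ambient (scattered) light. A minimal sketch of one plausible loss built on that model follows; the re-degradation comparison and L1 penalty here are assumptions, since the paper's adaptive weighting is not given in this abstract.

```python
import torch
import torch.nn.functional as F

def scatter_consistency_loss(enhanced: torch.Tensor,
                             raw: torch.Tensor,
                             transmission: torch.Tensor,
                             ambient_light: torch.Tensor) -> torch.Tensor:
    """Hypothetical scatter-consistency term: re-degrade the enhanced image
    through the atmospheric scattering model and compare it to the raw input.
    SFP-Net's actual adaptive weighting is not reproduced here.
    """
    resynthesized = enhanced * transmission + ambient_light * (1.0 - transmission)
    return F.l1_loss(resynthesized, raw)
```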
Journal description:
Displays is an international journal covering research and development in display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.