Underwater sequential images enhancement via diffusion and physics priors fusion

IF 15.5 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Haochen Hu, Yanrui Bin, Chih-yung Wen, Bing Wang
{"title":"基于扩散和物理先验融合的水下序列图像增强","authors":"Haochen Hu,&nbsp;Yanrui Bin,&nbsp;Chih-yung Wen,&nbsp;Bing Wang","doi":"10.1016/j.inffus.2025.103365","DOIUrl":null,"url":null,"abstract":"<div><div>Although learning-based Underwater Image Enhancement (UIE) methods have demonstrated its remarkable performance, several issues remain to be addressed. A critical research gap is that different water effects are not properly removed, including color bias, low contrast, and blur. This is mainly due to the synthetic-real domain gap of the training data. They are either (1) real underwater images but with synthetic pseudo-labels or (2) synthetic underwater images although with accurate labels. However, it is extremely challenging to collect real-world data with true labels, where the water should be removed to obtain true references. Besides, the inter-frame consistency is not preserved because the previous works are designed for single-image enhancement. To address these two issues, a novel UIE framework fusing both diffusion and physics priors is present in this work. The extensive prior knowledge embedded in the pre-trained video diffusion model is leveraged for the first time to achieve zero-shot generalization from synthetic to real-world UIE task, including both single-frame quality and inter-frame consistency. In addition, a synthetic data augmentation strategy based on the physical imaging model is proposed to further alleviate the synthetic-real inconsistency. 
Qualitative and quantitative experiments on various real-world underwater scenes demonstrate the significance of our approach, producing results superior to existing works in terms of both visual fidelity and quantitative metrics.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103365"},"PeriodicalIF":15.5000,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Underwater sequential images enhancement via diffusion and physics priors fusion\",\"authors\":\"Haochen Hu,&nbsp;Yanrui Bin,&nbsp;Chih-yung Wen,&nbsp;Bing Wang\",\"doi\":\"10.1016/j.inffus.2025.103365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Although learning-based Underwater Image Enhancement (UIE) methods have demonstrated its remarkable performance, several issues remain to be addressed. A critical research gap is that different water effects are not properly removed, including color bias, low contrast, and blur. This is mainly due to the synthetic-real domain gap of the training data. They are either (1) real underwater images but with synthetic pseudo-labels or (2) synthetic underwater images although with accurate labels. However, it is extremely challenging to collect real-world data with true labels, where the water should be removed to obtain true references. Besides, the inter-frame consistency is not preserved because the previous works are designed for single-image enhancement. To address these two issues, a novel UIE framework fusing both diffusion and physics priors is present in this work. The extensive prior knowledge embedded in the pre-trained video diffusion model is leveraged for the first time to achieve zero-shot generalization from synthetic to real-world UIE task, including both single-frame quality and inter-frame consistency. 
In addition, a synthetic data augmentation strategy based on the physical imaging model is proposed to further alleviate the synthetic-real inconsistency. Qualitative and quantitative experiments on various real-world underwater scenes demonstrate the significance of our approach, producing results superior to existing works in terms of both visual fidelity and quantitative metrics.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"124 \",\"pages\":\"Article 103365\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525004385\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525004385","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
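The abstract does not spell out which physical imaging model drives the augmentation strategy, but a common choice in UIE work is the classic underwater image formation model, in which each channel of the observed image is the water-free scene attenuated by transmission plus blue-green veiling light. A minimal sketch of synthesizing degraded training images under that model (the `beta` and `veiling` defaults here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def synthesize_underwater(clean_rgb, depth,
                          beta=(0.60, 0.25, 0.10),
                          veiling=(0.10, 0.45, 0.60)):
    """Degrade a clean image J with the classic underwater formation model:
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    clean_rgb : HxWx3 float array in [0, 1] (the water-free reference J)
    depth     : HxW scene depth in metres
    beta      : per-channel attenuation coefficients (red is absorbed fastest)
    veiling   : background (veiling) light colour B, typically blue-green
    """
    t = np.exp(-np.asarray(beta) * depth[..., None])   # HxWx3 transmission map
    B = np.asarray(veiling)                            # broadcasts over H and W
    return clean_rgb * t + B * (1.0 - t)

# Randomly sampling beta, veiling and depth per training pair is one simple way
# to widen the coverage of synthetic water effects during augmentation.
```

Because the model is channel-wise linear in the transmission, a single broadcasted `exp` suffices; the red channel decays fastest and the output drifts toward the veiling colour as depth grows.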
Citations: 0


Source journal: Information Fusion (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Aims and scope: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.