See Through Water: Heuristic Modeling Toward Color Correction for Underwater Image Enhancement

IF 8.3 · CAS Region 1 (Engineering & Technology) · JCR Q1, Engineering, Electrical & Electronic
Junyu Fan;Jie Xu;Jingchun Zhou;Danling Meng;Yi Lin
DOI: 10.1109/TCSVT.2024.3516781
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 5, pp. 4039-4054
Published: 2024-12-13 (Journal Article)
Publisher page: https://ieeexplore.ieee.org/document/10798483/
Citations: 0

Abstract

Color cast is one of the main degradations in underwater images. Existing data-driven methods, while capable of learning color correction rules from large datasets, often overlook the imaging characteristics and light behavior in underwater environments, making them unable to accurately restore colors in complex water bodies. To address this, we use color constancy and an underwater imaging model to heuristically model the underwater environment for accurate color restoration. On one hand, we propose a multi-scale joint prior network architecture to fully explore the rich feature-level information at different scales in underwater images. This is used to fit the complex parameters of the underwater imaging model, deriving high-quality potential undegraded images. On the other hand, to tackle the challenges of color distortion caused by complex imaging factors in different water environments, we estimate the background light of the water body through the color constancy of underwater objects and dynamically incorporate it into the underwater imaging model as a prior. This not only guides the learning process more effectively but also allows the model to consider key aspects of underwater optical propagation, making it adaptable to different water environments and improving the color accuracy of the enhanced images. We have also conducted extensive experiments to demonstrate the effectiveness of the proposed method, which not only achieves the best overall performance in qualitative analysis and quantitative comparison but also boasts the best color accuracy and the fastest inference speed. The code is available at https://github.com/JunyuFan/MJPNet.
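The abstract does not give the paper's exact parameterization, but the underwater imaging model it refers to is conventionally the scattering model I(x) = J(x)·t(x) + B·(1 − t(x)), where I is the observed image, J the latent undegraded scene, t the per-pixel transmission, and B the background (veiling) light of the water body. The sketch below is a minimal, hypothetical illustration of that standard model: a gray-world-style stand-in for the color-constancy-based background-light estimate, and the closed-form inversion that recovers J once t and B are known (the paper instead fits these parameters with its multi-scale joint prior network).

```python
import numpy as np

# Standard underwater image formation model (an assumption; the abstract does
# not spell out the paper's parameterization):
#   I_c(x) = J_c(x) * t_c(x) + B_c * (1 - t_c(x))

def grayworld_background_light(img):
    """Per-channel background-light estimate via a gray-world-style
    color-constancy assumption (a hypothetical stand-in for the paper's
    learned, prior-guided estimator)."""
    return img.reshape(-1, img.shape[-1]).mean(axis=0)

def invert_imaging_model(img, transmission, background, t_min=0.1):
    """Recover the latent undegraded image J from I, t, and B."""
    t = np.clip(transmission, t_min, 1.0)      # avoid division blow-up as t -> 0
    return (img - background) / t + background

# Toy round trip on a synthetic 4x4 "underwater" image.
rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(4, 4, 3))  # latent scene J
B = np.array([0.1, 0.4, 0.5])                  # bluish-green veiling light
t = np.full((4, 4, 3), 0.6)                    # uniform transmission
observed = clean * t + B * (1.0 - t)           # forward model
restored = invert_imaging_model(observed, t, B)
assert np.allclose(restored, clean)            # exact inversion when t, B are known
```

In practice t and B are unknown and spatially varying, which is precisely why the paper estimates them from data; the inversion step itself, however, follows directly from the model above.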
Source journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
Journal description: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.