Modeling Scattering Effect for Under-Display Camera Image Restoration

IF 11.6 · CAS Zone 2 (Computer Science) · Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Binbin Song, Jiantao Zhou, Xiangyu Chen, Shuning Xu
DOI: 10.1007/s11263-025-02454-y
Journal: International Journal of Computer Vision · Published: 2025-05-25 · Journal Article
Citations: 0

Abstract


The under-display camera (UDC) technology furnishes users with an uninterrupted full-screen viewing experience, eliminating the need for notches or punch holes. However, the translucent properties of the display lead to substantial degradation in UDC images. This work addresses the challenge of restoring UDC images by specifically targeting the scattering effect induced by the display. We explicitly model this scattering phenomenon by treating the display as a homogeneous scattering medium. Leveraging this physical model, the image formation pipeline is enhanced to synthesize more realistic UDC images alongside corresponding ground-truth images, thereby constructing a more accurate UDC dataset. To counteract the scattering effect in the restoration process, we propose a dual-branch network. The scattering branch employs channel-wise self-attention to estimate the scattering parameters, while the image branch capitalizes on the local feature representation capabilities of CNNs to restore the degraded UDC images. Additionally, we introduce a novel channel-wise cross-attention fusion block that integrates global scattering information into the image branch, facilitating improved restoration. To further refine the model, we design a dark channel regularization loss during training to reduce the gap between the dark channel distributions of the restored and ground-truth images. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the superiority of our approach over current state-of-the-art UDC restoration methods. Our source code is publicly available at: https://github.com/NamecantbeNULL/SRUDC_pp.
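The two ideas the abstract names — treating the display as a homogeneous scattering medium, and a dark channel regularization loss — can be illustrated with a small sketch. This is not the paper's implementation: the constant-transmission synthesis form and the function names (`synthesize_scattered`, `dark_channel`, `dark_channel_loss`) are assumptions for illustration, loosely following the dark channel prior familiar from image dehazing.

```python
import numpy as np

def synthesize_scattered(clean, beta=0.8, ambient=0.6):
    """Degrade a clean image with a homogeneous-medium scattering
    approximation (assumed form): I = J * t + A * (1 - t), where the
    transmission t = exp(-beta) is taken as constant across the display."""
    t = np.exp(-beta)
    return clean * t + ambient * (1.0 - t)

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over color channels, followed by
    a local minimum filter over patch x patch neighborhoods."""
    chan_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    h, w = chan_min.shape
    out = np.empty_like(chan_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dark_channel_loss(restored, gt, patch=15):
    """L1 gap between the dark channels of restored and ground-truth images."""
    return float(np.abs(dark_channel(restored, patch)
                        - dark_channel(gt, patch)).mean())
```

Under this approximation, scattering lifts the dark channel of the degraded image above that of the clean image, so penalizing the dark channel gap during training pushes the restored image back toward clean-image statistics.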

Source journal: International Journal of Computer Vision (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 29.80
Self-citation rate: 2.10%
Articles per year: 163
Review time: 6 months
About the journal: The International Journal of Computer Vision (IJCV) serves as a platform for sharing new research findings in the rapidly growing field of computer vision. It publishes 12 issues annually and presents high-quality, original contributions to the science and engineering of computer vision. The journal encompasses various types of articles to cater to different research outputs. Regular articles, which span up to 25 journal pages, focus on significant technical advances of broad interest to the field. Short articles, limited to 10 pages, offer a swift publication path for novel research outcomes. Survey articles, comprising up to 30 pages, offer critical evaluations of the current state of the art in computer vision or tutorial presentations of relevant topics. In addition to technical articles, the journal also includes book reviews, position papers, and editorials by prominent scientific figures. The journal encourages authors to include supplementary material online, such as images, video sequences, data sets, and software, to enhance the understanding and reproducibility of the published research.