Intrinsic Decomposition with Robustly Separating and Restoring Colored Illumination.

Impact Factor: 6.5
Hao Sha, Shining Ma, Tongtai Cao, Yu Han, Yu Liu, Yue Liu
{"title":"Intrinsic Decomposition with Robustly Separating and Restoring Colored Illumination.","authors":"Hao Sha, Shining Ma, Tongtai Cao, Yu Han, Yu Liu, Yue Liu","doi":"10.1109/TVCG.2025.3564229","DOIUrl":null,"url":null,"abstract":"<p><p>Intrinsic decomposition separates an image into reflectance and shading, which contributes to image editing, augmented reality, etc. Despite recent efforts dedicated to this field, effectively separating colored illumination from reflectance and correctly restoring it into shading remains an challenge. We propose a deep intrinsic decomposition method to address this issue. Specifically, by transforming intrinsic decomposition process in RGB image domains into the combination of intensity and chromaticity domains, we propose a novel macro intrinsic decomposition network framework. This framework enables the generation of finer intrinsic components through more relevant features propagation and more detailed sub-constraints guidance. In order to expand the macro network, we integrate multiple attention mechanism modules in key positions of encoders, which enhances the extraction of distinct features. We also propose a skip connection module based on specific deep features guidance, which can filter out features that are physically irrelevant to each intrinsic component. Our method not only outperforms state-of-the-art methods across multiple datasets, but also robustly separates illumination from reflectance and restores it into shading in various types of images. By leveraging our intrinsic images, we achieve visually superior image editing effects compared to other methods, while also being able to manipulate the inherent lighting of the original scene.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5000,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3564229","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Intrinsic decomposition separates an image into reflectance and shading, which benefits image editing, augmented reality, and other applications. Despite recent efforts in this field, effectively separating colored illumination from reflectance and correctly restoring it into shading remains a challenge. We propose a deep intrinsic decomposition method to address this issue. Specifically, by transforming the intrinsic decomposition process from the RGB image domain into a combination of the intensity and chromaticity domains, we propose a novel macro intrinsic decomposition network framework. This framework enables the generation of finer intrinsic components through the propagation of more relevant features and the guidance of more detailed sub-constraints. To expand the macro network, we integrate multiple attention modules at key positions in the encoders, enhancing the extraction of distinct features. We also propose a skip-connection module guided by specific deep features, which filters out features that are physically irrelevant to each intrinsic component. Our method not only outperforms state-of-the-art methods across multiple datasets, but also robustly separates illumination from reflectance and restores it into shading for various types of images. By leveraging our intrinsic images, we achieve visually superior image editing results compared to other methods, while also being able to manipulate the inherent lighting of the original scene.
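
The abstract does not specify how the intensity/chromaticity reformulation is computed. As a rough, non-authoritative illustration of this kind of domain transform, the minimal PyTorch sketch below splits an RGB image into an intensity map and a per-pixel chromaticity map, and recombines reflectance and shading under the standard Lambertian intrinsic model I = R * S. The particular choices (channel mean for intensity, sum-normalized channels for chromaticity) and all names are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' code): one plausible intensity/chromaticity
# transform for intrinsic decomposition. All names here are hypothetical.
import torch


def rgb_to_intensity_chromaticity(rgb: torch.Tensor, eps: float = 1e-6):
    """Split an RGB batch (B, 3, H, W) into an intensity map (B, 1, H, W)
    and a chromaticity map (B, 3, H, W) whose channels sum to ~1 per pixel."""
    intensity = rgb.mean(dim=1, keepdim=True)                  # luminance proxy
    chromaticity = rgb / (rgb.sum(dim=1, keepdim=True) + eps)  # normalized color
    return intensity, chromaticity


def recompose_image(reflectance: torch.Tensor, shading: torch.Tensor) -> torch.Tensor:
    """Classic Lambertian intrinsic model I = R * S (element-wise).
    With colored illumination, shading is a 3-channel map rather than grayscale."""
    return reflectance * shading


# Usage: the intensity map could feed a shading-oriented branch and the
# chromaticity map a reflectance / illumination-color branch.
img = torch.rand(1, 3, 256, 256)
intensity, chromaticity = rgb_to_intensity_chromaticity(img)
```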
