Color Decoupling for Multi-Illumination Color Constancy

IF 8.3 | CAS Tier 1 (Engineering & Technology) | JCR Q1 (ENGINEERING, ELECTRICAL & ELECTRONIC)
Wen Zhang;Zhenshan Tan;Li Zhang;Zhijiang Li
{"title":"多照度色彩常数的色彩解耦","authors":"Wen Zhang;Zhenshan Tan;Li Zhang;Zhijiang Li","doi":"10.1109/TCSVT.2024.3523019","DOIUrl":null,"url":null,"abstract":"Current multi-illumination color constancy methods typically estimate illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by various components, including the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content, and the distribution of the estimated illuminations is relatively scattered. On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors, and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains feature representation, decoupling illuminants by mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and utilizes illumination disparity degree to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, by allocating fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"4087-4099"},"PeriodicalIF":8.3000,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Color Decoupling for Multi-Illumination Color Constancy\",\"authors\":\"Wen Zhang;Zhenshan Tan;Li Zhang;Zhijiang Li\",\"doi\":\"10.1109/TCSVT.2024.3523019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Current multi-illumination color constancy methods typically estimate illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by various components, including the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content, and the distribution of the estimated illuminations is relatively scattered. 
On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors, and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains feature representation, decoupling illuminants by mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and utilizes illumination disparity degree to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, by allocating fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 5\",\"pages\":\"4087-4099\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2024-12-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10816431/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10816431/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Current multi-illumination color constancy methods typically estimate illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by several components: the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content and the distribution of the estimated illuminations is relatively scattered. On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains feature representation, decoupling illuminants by mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and utilizes the illumination disparity degree to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, CHAM allocates fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.
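
The decomposition the abstract refers to can be made concrete. A common form of the multi-illuminant imaging equation (a sketch of the standard Lambertian model; the paper's exact formulation may differ) is

$$
\rho_c(x) \;=\; R_c(x) \sum_{i=1}^{N} w_i(x)\, \ell_{i,c}, \qquad c \in \{R, G, B\},
$$

where $\rho_c(x)$ is the observed value of pixel $x$ in channel $c$, $R_c(x)$ is the innate (reflectance) color of the scene content, $\ell_{i,c}$ is the color of the $i$-th illuminant, and $w_i(x)$ is its per-pixel weighting. Estimating the mixture $\sum_i w_i(x)\,\ell_{i,c}$ per pixel without separating it from $R_c(x)$ is precisely the content-illumination coupling the abstract describes.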
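For intuition about what a successful per-pixel illumination estimate buys, here is a minimal sketch of the standard von Kries-style diagonal correction applied with a multi-illuminant map. The function name and the (H, W, 3) interface are assumptions for illustration, not the authors' code:

```python
import numpy as np

def apply_multi_illuminant_correction(image: np.ndarray,
                                      illum_map: np.ndarray) -> np.ndarray:
    """Divide out a per-pixel illumination estimate (von Kries correction).

    image:     (H, W, 3) linear RGB image.
    illum_map: (H, W, 3) per-pixel illuminant colors, e.g. the output of a
               multi-illuminant estimator (hypothetical interface).
    """
    # Normalize each pixel's illuminant to unit mean so brightness is
    # preserved and only the color cast is removed.
    norm = illum_map / (illum_map.mean(axis=-1, keepdims=True) + 1e-8)
    corrected = image / (norm + 1e-8)
    return np.clip(corrected, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((4, 4, 3))  # innate scene colors
    # Toy two-illuminant scene: a warm and a cool light blended left to right.
    warm, cool = np.array([1.2, 1.0, 0.8]), np.array([0.8, 1.0, 1.2])
    w = np.linspace(0.0, 1.0, 4)[None, :, None]
    illum = np.broadcast_to((1 - w) * warm + w * cool, (4, 4, 3))
    restored = apply_multi_illuminant_correction(scene * illum, illum)
    print(np.abs(restored - scene).max())  # small residual: cast removed
```

A diagonal correction like this is only as good as the estimate it is given; the coupling effects the abstract lists (content-misled estimates, low local contrast at illumination edges) show up directly as residual casts after this step.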
Source journal: IEEE Transactions on Circuits and Systems for Video Technology
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.