{"title":"多照度色彩常数的色彩解耦","authors":"Wen Zhang;Zhenshan Tan;Li Zhang;Zhijiang Li","doi":"10.1109/TCSVT.2024.3523019","DOIUrl":null,"url":null,"abstract":"Current multi-illumination color constancy methods typically estimate illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by various components, including the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content, and the distribution of the estimated illuminations is relatively scattered. On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors, and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains feature representation, decoupling illuminants by mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and utilizes illumination disparity degree to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, by allocating fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"4087-4099"},"PeriodicalIF":8.3000,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Color Decoupling for Multi-Illumination Color Constancy\",\"authors\":\"Wen Zhang;Zhenshan Tan;Li Zhang;Zhijiang Li\",\"doi\":\"10.1109/TCSVT.2024.3523019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Current multi-illumination color constancy methods typically estimate illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by various components, including the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content, and the distribution of the estimated illuminations is relatively scattered. 
On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors, and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains feature representation, decoupling illuminants by mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and utilizes illumination disparity degree to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, by allocating fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 5\",\"pages\":\"4087-4099\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2024-12-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10816431/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10816431/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Color Decoupling for Multi-Illumination Color Constancy
Current multi-illumination color constancy methods typically estimate the illumination for each pixel directly. However, according to the multi-illumination imaging equation, the color of each pixel is determined by several components, including the innate color of the scene content, the colors of multiple illuminations, and the weightings of these illuminations. Failing to distinguish between these components results in color coupling. On the one hand, there is color coupling between illumination and scene content, where estimations are easily misled by the colors of the content and the distribution of the estimated illuminations is relatively scattered. On the other hand, there is color coupling between illuminations, where estimations are susceptible to interference from high-frequency and heterogeneous illumination colors and the local contrast is low. To address color coupling, we propose a Color Decoupling Network (CDNet) that includes a Content Color Awareness Module (CCAM) and a Contrast HArmonization Module (CHAM). CCAM learns scene content color priors, decoupling the colors of content and illuminations by providing the model with the color features of the content, thereby reducing out-of-gamut estimations and enhancing consistency. CHAM constrains the feature representation, decoupling illuminants through mutual calibration between adjacent features. CHAM utilizes spatial correlation to make the model more sensitive to the relationships between neighboring features and uses the degree of illumination disparity to guide feature classification. By enhancing the uniqueness of homogeneous illumination features and the distinctiveness of heterogeneous illumination features, CHAM improves local edge contrast. Additionally, CHAM allocates fine-grained margin coefficients to emphasize the soft distinctiveness of similar illumination features, further enhancing local contrast. Extensive experiments on single- and multi-illumination benchmark datasets demonstrate that the proposed method achieves superior performance.
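For reference, the multi-illumination imaging equation invoked above is commonly written as a per-pixel mixture of illuminant colors modulating the innate (reflectance) color of the scene. The formulation below is one standard, von Kries-style version; the notation (I, R, L_k, w_k) is chosen here for illustration and is not necessarily the paper's.

```latex
% One common per-pixel multi-illuminant image formation model; the paper's
% exact equation may differ in detail.
%   I_c(x)  : observed value of pixel x in channel c \in {R, G, B}
%   R_c(x)  : innate (reflectance) color of the scene content at x
%   L_{k,c} : color of the k-th illuminant, k = 1, ..., K
%   w_k(x)  : spatial weighting of illuminant k at pixel x, with \sum_k w_k(x) = 1
\begin{equation}
  I_c(x) \;=\; R_c(x) \sum_{k=1}^{K} w_k(x)\, L_{k,c},
  \qquad c \in \{R, G, B\}.
\end{equation}
```

Under this model, pixel-wise color constancy amounts to estimating the effective illuminant \(\sum_k w_k(x) L_{k,c}\) at every pixel; the coupling the abstract describes arises because the reflectance term and the illuminant terms only ever appear as a product in the observation.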
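To make the role of such a per-pixel estimate concrete, the following is a minimal NumPy sketch (not the paper's CDNet; all function and variable names are hypothetical) that renders a two-illuminant mixture under the model above and then applies a diagonal, von Kries-style correction using the per-pixel illuminant map. A learned method such as CDNet would replace the known illum_map with its pixel-wise estimate.

```python
import numpy as np

def mix_illuminants(reflectance, illum_colors, weights):
    """Render a multi-illuminant image: I(x,c) = R(x,c) * sum_k w_k(x) * L_k(c).

    reflectance:  (H, W, 3) innate scene colors in [0, 1]
    illum_colors: (K, 3) RGB color of each illuminant
    weights:      (H, W, K) per-pixel illuminant weights, summing to 1 over K
    """
    illum_map = weights @ illum_colors          # effective per-pixel illuminant, (H, W, 3)
    return reflectance * illum_map, illum_map

def white_balance(image, illum_map, eps=1e-6):
    """Diagonal (von Kries) correction: divide each pixel by its illuminant,
    with gains normalized so the green channel stays roughly unchanged."""
    gains = illum_map[..., 1:2] / (illum_map + eps)
    return np.clip(image * gains, 0.0, 1.0)

if __name__ == "__main__":
    H, W = 64, 64
    rng = np.random.default_rng(0)
    reflectance = rng.uniform(0.1, 0.9, size=(H, W, 3))

    # Two illuminants (warm indoor light, bluish daylight) blended left-to-right.
    illum_colors = np.array([[1.00, 0.85, 0.60],
                             [0.70, 0.85, 1.00]])
    w = np.linspace(0.0, 1.0, W)[None, :, None] * np.ones((H, 1, 1))
    weights = np.concatenate([1.0 - w, w], axis=-1)            # (H, W, 2)

    image, illum_map = mix_illuminants(reflectance, illum_colors, weights)
    corrected = white_balance(image, illum_map)                 # ideal per-pixel correction
    print(image.shape, corrected.shape)
```

Normalizing the gains by the green channel is one common convention so that overall brightness is roughly preserved; other normalizations (e.g., unit norm) would work equally well in this sketch.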
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.