ColorAssist: Perception-Based Recoloring for Color Vision Deficiency Compensation

Authors: Liqun Lin, Shangxi Xie, Yanting Wang, Bolin Chen, Ying Xue, Xiahai Zhuang, Tiesong Zhao
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 5658-5671
DOI: 10.1109/TIP.2025.3602643
Publication date: 2025-09-01
Article: https://ieeexplore.ieee.org/document/11146425/
Image enhancement methods have been widely studied to improve the visual quality of diverse images, implicitly assuming that all human observers have normal vision. However, a large population worldwide suffers from Color Vision Deficiency (CVD), and enhancing images to compensate for their altered perception remains a challenging problem. Existing CVD compensation methods have two drawbacks: first, the available datasets and validations have not been rigorously evaluated by individuals with CVD; second, these methods struggle to strike an optimal balance between contrast enhancement and naturalness preservation, which often yields suboptimal results for individuals with CVD. To address these issues, we develop FZU-CVDSet, the first large-scale dataset labeled by individuals with CVD, and ColorAssist, a CVD-friendly recoloring algorithm. In particular, we design a perception-guided feature extraction module and a perception-guided diffusion transformer module that jointly achieve efficient image recoloring for individuals with CVD. Comprehensive experiments on FZU-CVDSet and subjective tests in hospitals demonstrate that the proposed ColorAssist closely aligns with the visual perception of individuals with CVD and outperforms state-of-the-art methods. The source code is available at https://github.com/xsx-fzu/ColorAssist.
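The paper's recoloring pipeline is not reproduced here, but the problem it targets can be illustrated with a standard dichromacy simulation: the physiologically based transformation matrices of Machado et al. (2009), a common baseline in CVD compensation work and not part of ColorAssist itself, approximate how an image appears to a viewer with protanopia. A minimal sketch, assuming linear-RGB input in [0, 1]:

```python
import numpy as np

# Machado et al. (2009) full-severity protanopia matrix, applied in linear RGB.
# This is a standard simulation baseline, not the transform used by ColorAssist.
PROTANOPIA = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def simulate_protanopia(rgb_linear: np.ndarray) -> np.ndarray:
    """Simulate protanopia for an H x W x 3 linear-RGB image in [0, 1]."""
    out = rgb_linear @ PROTANOPIA.T       # per-pixel 3x3 color transform
    return np.clip(out, 0.0, 1.0)        # clamp back into displayable range

# Pure red and pure green both land on the same yellow-blue hue axis,
# losing the red-green opposition a recoloring method must restore.
red   = simulate_protanopia(np.array([[[1.0, 0.0, 0.0]]]))
green = simulate_protanopia(np.array([[[0.0, 1.0, 0.0]]]))
```

Because each matrix row sums to one, achromatic colors are left unchanged, while saturated reds lose most of their energy; this loss of red-green contrast is exactly what compensation methods trade off against naturalness.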