{"title":"Diff-Retinex++:用于弱光图像增强的retinex驱动增强扩散模型","authors":"Xunpeng Yi;Han Xu;Hao Zhang;Linfeng Tang;Jiayi Ma","doi":"10.1109/TPAMI.2025.3563612","DOIUrl":null,"url":null,"abstract":"This paper proposes a Retinex-driven reinforced diffusion model for low-light image enhancement, termed Diff-Retinex++, to address various degradations caused by low light. Our main approach integrates the diffusion model with Retinex-driven restoration to achieve physically-inspired generative enhancement, making it a pioneering effort. To be detailed, Diff-Retinex++ consists of two-stage view modules, including the Denoising Diffusion Model (DDM), and the Retinex-Driven Mixture of Experts Model (RMoE). First, DDM treats low-light image enhancement as one type of image generation task, benefiting from the powerful generation ability of diffusion model to handle the enhancement. Second, we design the Retinex theory into the plug-and-play supervision attention module. It leverages the latent features in the backbone and knowledge distillation to learn Retinex rules, and further regulates these latent features through the attention mechanism. In this way, it couples the relationship between Retinex decomposition and image enhancement in a new view, achieving dual improvement. In addition, the Low-Light Mixture of Experts preserves the vividness of the diffusion model and fidelity of the Retinex-driven restoration to the greatest extent. Ultimately, the iteration of DDM and RMoE achieves the goal of Retinex-driven reinforced diffusion model. Extensive experiments conducted on real-world low-light datasets qualitatively and quantitatively demonstrate the effectiveness, superiority, and generalization of the proposed method.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 8","pages":"6823-6841"},"PeriodicalIF":18.6000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Diff-Retinex++: Retinex-Driven Reinforced Diffusion Model for Low-Light Image Enhancement\",\"authors\":\"Xunpeng Yi;Han Xu;Hao Zhang;Linfeng Tang;Jiayi Ma\",\"doi\":\"10.1109/TPAMI.2025.3563612\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a Retinex-driven reinforced diffusion model for low-light image enhancement, termed Diff-Retinex++, to address various degradations caused by low light. Our main approach integrates the diffusion model with Retinex-driven restoration to achieve physically-inspired generative enhancement, making it a pioneering effort. To be detailed, Diff-Retinex++ consists of two-stage view modules, including the Denoising Diffusion Model (DDM), and the Retinex-Driven Mixture of Experts Model (RMoE). First, DDM treats low-light image enhancement as one type of image generation task, benefiting from the powerful generation ability of diffusion model to handle the enhancement. Second, we design the Retinex theory into the plug-and-play supervision attention module. It leverages the latent features in the backbone and knowledge distillation to learn Retinex rules, and further regulates these latent features through the attention mechanism. In this way, it couples the relationship between Retinex decomposition and image enhancement in a new view, achieving dual improvement. 
In addition, the Low-Light Mixture of Experts preserves the vividness of the diffusion model and fidelity of the Retinex-driven restoration to the greatest extent. Ultimately, the iteration of DDM and RMoE achieves the goal of Retinex-driven reinforced diffusion model. Extensive experiments conducted on real-world low-light datasets qualitatively and quantitatively demonstrate the effectiveness, superiority, and generalization of the proposed method.\",\"PeriodicalId\":94034,\"journal\":{\"name\":\"IEEE transactions on pattern analysis and machine intelligence\",\"volume\":\"47 8\",\"pages\":\"6823-6841\"},\"PeriodicalIF\":18.6000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on pattern analysis and machine intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10974676/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10974676/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Diff-Retinex++: Retinex-Driven Reinforced Diffusion Model for Low-Light Image Enhancement
This paper proposes a Retinex-driven reinforced diffusion model for low-light image enhancement, termed Diff-Retinex++, to address the various degradations caused by low light. Our approach integrates a diffusion model with Retinex-driven restoration to achieve physically-inspired generative enhancement, making it a pioneering effort. Specifically, Diff-Retinex++ consists of two stage-wise modules: the Denoising Diffusion Model (DDM) and the Retinex-Driven Mixture of Experts Model (RMoE). First, the DDM treats low-light image enhancement as an image generation task, exploiting the powerful generative ability of the diffusion model to handle the enhancement. Second, we embed Retinex theory into a plug-and-play supervised attention module, which leverages the latent features of the backbone together with knowledge distillation to learn Retinex rules, and further regulates these latent features through an attention mechanism. In this way, the relationship between Retinex decomposition and image enhancement is coupled from a new perspective, yielding mutual improvement. In addition, the Low-Light Mixture of Experts preserves the vividness of the diffusion model and the fidelity of the Retinex-driven restoration to the greatest extent. Ultimately, iterating the DDM and RMoE realizes the Retinex-driven reinforced diffusion model. Extensive experiments on real-world low-light datasets qualitatively and quantitatively demonstrate the effectiveness, superiority, and generalization of the proposed method.
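For readers unfamiliar with the Retinex model underlying the method, the sketch below illustrates the classical decomposition I = R * L (reflectance times illumination) that the paper's Retinex-driven supervision builds on. It is a minimal reference implementation under assumed choices: the Gaussian-blur illumination estimate, the function names, and the gamma adjustment are illustrative only, not the learned decomposition or enhancement pipeline used in Diff-Retinex++.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def retinex_decompose(image, sigma=15.0, eps=1e-6):
    """Illustrative single-scale Retinex decomposition.

    Classical Retinex assumes an observed image I is the pixel-wise product
    of reflectance R and illumination L, i.e. I = R * L. Here L is roughly
    approximated by a Gaussian blur of the input; this is an assumption for
    illustration, not the learned decomposition in Diff-Retinex++.
    """
    image = image.astype(np.float64) + eps
    illumination = gaussian_filter(image, sigma=sigma) + eps  # smooth estimate of L
    reflectance = image / illumination                        # R = I / L
    return reflectance, illumination


if __name__ == "__main__":
    # Usage sketch: decompose a low-light image, brighten the illumination
    # component (simple gamma correction), and recombine. This is the most
    # basic Retinex-style enhancement that learned pipelines generalize.
    low_light = np.random.rand(256, 256)      # stand-in for a real grayscale image in [0, 1]
    R, L = retinex_decompose(low_light)
    enhanced = R * np.power(L, 0.5)            # gamma-adjusted illumination, gamma = 0.5
```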