{"title":"LoRaDIP: Low-Rank Adaptation With Deep Image Prior for Generative Low-Light Image Enhancement","authors":"Zunjin Zhao;Daming Shi","doi":"10.1109/TAI.2024.3499950","DOIUrl":null,"url":null,"abstract":"This article presents LoRaDIP, a novel low-light image enhancement (LLIE) model based on deep image priors (DIPs). While DIP-based enhancement models are known for their zero-shot learning, their expensive computational cost remains a challenge. In addressing this issue, our proposed LoRaDIP introduces a low-rank adaptation technique, significantly reducing computational expenses without compromising performance. The contributions of this work are threefold. First, we eliminate the need for estimating initial illumination and reflectance, opting instead to directly estimate the illumination map from the observed image in a generative fashion. The illumination is parameterized by a DIP network. Second, considering the overparameterization of DIP networks, we introduce a low-rank adaptation technique to decrease the number of trainable parameters, thereby reducing computational demands. Third, differing from the existing DIP-based models that rely on a preset fixed number of iterations to halt the optimization process of Retinex decomposition, we propose an automatic stopping criterion based on stable rank, preventing unnecessary iterations. 
LoRaDIP not only inherits the advantage of requiring only the single input image but also exhibits reduced computational costs while maintaining or even surpassing the performance of state-of-the-art models.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"909-920"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10754638/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This article presents LoRaDIP, a novel low-light image enhancement (LLIE) model based on deep image priors (DIPs). While DIP-based enhancement models are known for their zero-shot learning, their high computational cost remains a challenge. To address this issue, our proposed LoRaDIP introduces a low-rank adaptation technique, significantly reducing computational expense without compromising performance. The contributions of this work are threefold. First, we eliminate the need to estimate initial illumination and reflectance, instead directly estimating the illumination map from the observed image in a generative fashion; the illumination is parameterized by a DIP network. Second, given the overparameterization of DIP networks, we introduce a low-rank adaptation technique that decreases the number of trainable parameters, thereby reducing computational demands. Third, unlike existing DIP-based models that halt the Retinex-decomposition optimization after a preset fixed number of iterations, we propose an automatic stopping criterion based on stable rank, preventing unnecessary iterations. LoRaDIP not only inherits the advantage of requiring only a single input image but also exhibits reduced computational cost while matching or even surpassing the performance of state-of-the-art models.
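The Retinex model underlying the first contribution factors an observed low-light image into reflectance and illumination, so the enhanced image is the element-wise quotient of the image by its illumination map. The following is a minimal sketch of that quotient step only; the crude per-pixel illumination estimate here is a stand-in assumption, since in LoRaDIP the illumination map is produced by a DIP network.

```python
import numpy as np

def enhance(image, eps=1e-4):
    """Retinex-style relighting: reflectance = image / illumination.

    `image` is an H x W x 3 array in [0, 1]. The illumination estimate
    below (per-pixel max over channels) is a placeholder assumption,
    not the paper's DIP-based estimator.
    """
    illum = image.max(axis=-1, keepdims=True)  # smoothness omitted for brevity
    return np.clip(image / (illum + eps), 0.0, 1.0)
```

Dividing by the illumination map brightens dark regions more than bright ones, which is why estimating only the illumination (rather than both Retinex factors) suffices for enhancement.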
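The second contribution, low-rank adaptation, can be sketched in a few lines: a weight matrix W is kept frozen and only a rank-r update BA is trained, cutting the trainable parameter count from d·k to r·(d + k). This is a generic illustration of the technique, not the paper's architecture; the dimensions and initialization below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 256, 256, 4              # hypothetical layer size and adapter rank

W = rng.standard_normal((d, k))    # frozen base weight (not trained)
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))               # trainable; zero-init so the update starts at 0

def forward(x):
    # Effective weight is W + B @ A; the full-rank update is never stored.
    return x @ (W + B @ A).T

full_params = d * k                # 65536 parameters if W were trained
lora_params = r * (d + k)          # 2048 trainable parameters with rank 4
```

For r much smaller than min(d, k), the adapter trains orders of magnitude fewer parameters, which is the source of the computational savings the abstract claims.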
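For the third contribution, stable rank has a standard definition: srank(W) = ||W||_F^2 / ||W||_2^2, the squared Frobenius norm over the squared spectral norm. The sketch below computes it and implements one plausible plateau-based stopping rule; the paper's exact criterion may differ, and the `tol`/`window` thresholds are assumptions.

```python
import numpy as np

def stable_rank(W):
    # srank(W) = sum of squared singular values / largest squared singular value
    s = np.linalg.svd(W, compute_uv=False)
    return (s ** 2).sum() / (s[0] ** 2)

def should_stop(history, tol=1e-3, window=5):
    # Hypothetical rule: stop once the monitored stable rank has varied
    # by less than `tol` over the last `window` recorded iterations.
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol
```

Unlike the hard rank, stable rank is a smooth function of the weights, so tracking it across iterations gives a continuous signal for deciding when the Retinex-decomposition optimization has converged.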