RLNet: Reshaping Learning Network for Accurate Low-Light Light Field Image Enhancement
Xingzheng Wang, Jiahao Lin, Kaiqiang Chen, Wenhao Huang, Zixuan Wang, Yuanlong Deng
IEEE Transactions on Computational Imaging, vol. 11, pp. 421-434, published 2025-02-20
DOI: 10.1109/TCI.2025.3544070
Citations: 0
Abstract
Existing low-light light field (LF) image enhancement methods struggle to recover image details because they underutilize the 4D LF structure. We therefore propose a novel reshaping learning network, RLNet, to better exploit the 4D LF structure. The main idea of the network is divide and conquer: combine complementary traits from multiple 2D representations reshaped from the LF data. Specifically, the network first learns local and global information from neighboring views and from all views, respectively. Then, based on this extracted information, reshaping learning modules (RLMs) fully exploit the 4D LF structure by reshaping LF images into different 2D representations and simultaneously mining their corresponding traits. In addition, the network integrates contrast attention modules (CAMs) to enhance image details. To validate our idea, we comprehensively evaluate RLNet on three publicly available datasets with different illumination levels. Experimental results demonstrate the superior performance of our method, which achieves an average PSNR gain of 0.45 dB over current state-of-the-art methods.
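The abstract's core idea, reshaping 4D LF data into complementary 2D representations, can be illustrated with a minimal sketch. The function names and the toy array below are illustrative assumptions, not the authors' implementation; they show three standard 2D views of a 4D light field (sub-aperture image mosaic, macro-pixel/lenslet layout, and epipolar-plane image) that a network like RLNet could mine for different traits:

```python
import numpy as np

# Hypothetical 4D light field: U x V angular views, each H x W spatial (grayscale).
U, V, H, W = 5, 5, 32, 48
lf = np.random.rand(U, V, H, W).astype(np.float32)

def to_sai_mosaic(lf):
    """Tile all sub-aperture images (SAIs) into one 2D mosaic of shape (U*H, V*W)."""
    U, V, H, W = lf.shape
    return lf.transpose(0, 2, 1, 3).reshape(U * H, V * W)

def to_macro_pixel(lf):
    """Interleave angular samples per spatial location (lenslet layout), shape (H*U, W*V)."""
    U, V, H, W = lf.shape
    return lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)

def to_horizontal_epi(lf, v, y):
    """Extract one horizontal epipolar-plane image (EPI): a (U, W) slice at fixed v and row y."""
    return lf[:, v, y, :]

print(to_sai_mosaic(lf).shape)        # (160, 240)
print(to_macro_pixel(lf).shape)       # (160, 240)
print(to_horizontal_epi(lf, 2, 10).shape)  # (5, 48)
```

Each reshape is lossless and invertible, so a network can process all three representations of the same data in parallel and fuse the results, which is the "divide and conquer" strategy the abstract describes.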
About the Journal
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.