RCNet: Deep Recurrent Collaborative Network for Multi-View Low-Light Image Enhancement
Hao Luo; Baoliang Chen; Lingyu Zhu; Peilin Chen; Shiqi Wang
IEEE Transactions on Multimedia, vol. 27, pp. 2001-2014, published 2025-01-02
DOI: 10.1109/TMM.2024.3521760
Abstract
Scene observation from multiple perspectives provides a more comprehensive visual experience. However, acquiring multiple views in the dark leaves otherwise highly correlated views alienated from one another, making it challenging to improve scene understanding with auxiliary views. Recent single-image enhancement methods may not deliver consistently desirable restoration across all views because they ignore the potential feature correspondences among views. To alleviate this issue, we make the first attempt to investigate multi-view low-light image enhancement. First, we construct a new dataset called Multi-View Low-light Triplets (MVLT), which contains 1,860 pairs of image triplets with large illumination ranges and wide noise distributions; each triplet captures the same scene from three viewpoints. Second, we propose a multi-view enhancement framework based on the Recurrent Collaborative Network (RCNet). To benefit from similar texture correspondences across views, we design the recurrent feature enhancement, alignment, and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view EN) followed by inter-view feature alignment and fusion (Inter-view AF) models intra-view and inter-view feature propagation through multi-view collaboration. In addition, two modules, enhancement-to-alignment (E2A) and alignment-to-enhancement (A2E), are developed to enable interaction between Intra-view EN and Inter-view AF, applying attentive feature weighting and sampling to enhancement and alignment, respectively. Experimental results demonstrate that our RCNet significantly outperforms other state-of-the-art methods.
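To make the described architecture more concrete, below is a minimal PyTorch-style sketch of one recurrent step that chains Intra-view EN, E2A, Inter-view AF, and A2E over three views. Only the module names and data flow come from the abstract; every layer choice, channel width, and the simplified attention-based interaction are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names follow the abstract; internals are assumed.
import torch
import torch.nn as nn


class IntraViewEN(nn.Module):
    """Intra-view feature enhancement: refines each view's own features."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual enhancement


class InterViewAF(nn.Module):
    """Inter-view alignment and fusion: merges reference and auxiliary features."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, ref, aux):
        return self.fuse(torch.cat([ref, aux], dim=1))


class E2A(nn.Module):
    """Enhancement-to-alignment interaction: attentive weighting of enhanced features."""
    def __init__(self, ch):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, enhanced):
        return enhanced * self.att(enhanced)


class A2E(nn.Module):
    """Alignment-to-enhancement interaction: feeds fused features back to enhancement."""
    def __init__(self, ch):
        super().__init__()
        self.proj = nn.Conv2d(ch, ch, 1)

    def forward(self, fused):
        return self.proj(fused)


class ReEAF(nn.Module):
    """One recurrent enhancement-alignment-fusion step over a triplet of views."""
    def __init__(self, ch=32):
        super().__init__()
        self.en = IntraViewEN(ch)
        self.e2a = E2A(ch)
        self.af = InterViewAF(ch)
        self.a2e = A2E(ch)

    def forward(self, feats):  # feats: list of 3 feature maps, one per view
        enhanced = [self.en(f) for f in feats]       # Intra-view EN
        weighted = [self.e2a(f) for f in enhanced]   # E2A interaction
        ref, auxs = weighted[0], weighted[1:]
        fused = ref
        for aux in auxs:                             # Inter-view AF
            fused = self.af(fused, aux)
        feedback = self.a2e(fused)                   # A2E interaction
        return [f + feedback for f in enhanced]      # features for the next step


if __name__ == "__main__":
    views = [torch.randn(1, 32, 64, 64) for _ in range(3)]
    out = ReEAF(ch=32)(views)
    print([o.shape for o in out])
```

In this reading, the recurrence comes from stacking or iterating ReEAF steps so that fused inter-view information repeatedly refines each view's features; the actual paper may align views explicitly (e.g., via sampling) rather than with the simple concatenation used here.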
About the journal:
The IEEE Transactions on Multimedia covers diverse aspects of multimedia technology and applications, including circuits, networking, signal processing, systems, software, and systems integration. Its scope aligns with the Fields of Interest of its sponsoring societies, ensuring comprehensive coverage of multimedia research.