Staggered HDR video reconstruction with a real-world benchmark dataset for night scenes
Huanjing Yue, Changan He, Longhan Wang, Biting Yu, Xuanwu Yin, Zhenyu Zhou, Jingyu Yang
Displays, Volume 88, Article 103029, published 2025-03-21. DOI: 10.1016/j.displa.2025.103029. https://www.sciencedirect.com/science/article/pii/S0141938225000666
Citations: 0
Abstract
Capturing night scenes with full visibility is attractive. Due to the limited dynamic range of camera sensors, we cannot record details in both light-source regions and dark regions. A practical solution is to utilize multi-exposure fusion to obtain high dynamic range (HDR) results. However, the lack of a real-world NightHDR dataset hinders the development of deep-learning-based NightHDR video reconstruction. To solve this problem, we first construct a real-world NightHDR video dataset, which contains 57 LDR-HDR video pairs captured under night scenes in both raw and sRGB formats, where the LDR frames include short and long exposures. Different from previous alternating-exposure-based or frame-based HDR video reconstruction, we turn to staggered HDR reconstruction, which is more applicable in real scenarios. Correspondingly, we propose an efficient NightHDRNet, which contains single-exposure enhancement (stage I), two-exposure fusion (stage II), and two stage selective fusion modules. In this way, our network can improve the dynamic range and reduce ghosting artifacts. Extensive experiments show that our approach outperforms state-of-the-art methods qualitatively and quantitatively. We will release our dataset and code after the acceptance of this work.
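The abstract describes a staged pipeline: each LDR exposure is first enhanced on its own, and the short and long exposures are then fused into one HDR frame. As a rough illustration of how such a two-exposure fusion could be wired up, here is a minimal PyTorch sketch; the module names, channel widths, and the per-pixel weighted-blend fusion rule are assumptions made for illustration and are not taken from the authors' NightHDRNet.

```python
# Illustrative sketch only: a toy two-exposure HDR fusion pipeline in the spirit
# of the stages named in the abstract (single-exposure enhancement, then
# two-exposure fusion). This is NOT the authors' NightHDRNet; all design
# details below are assumptions.
import torch
import torch.nn as nn


class EnhanceBlock(nn.Module):
    """Stage I (assumed form): residual enhancement of a single LDR exposure."""
    def __init__(self, channels=3, features=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual correction of the input exposure


class FusionBlock(nn.Module):
    """Stage II (assumed form): blend the enhanced short/long exposures."""
    def __init__(self, channels=3, features=32):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, short_ldr, long_ldr):
        # Per-pixel blend weights predicted from both exposures.
        w = self.weight_net(torch.cat([short_ldr, long_ldr], dim=1))
        return w * short_ldr + (1.0 - w) * long_ldr


class TwoExposureHDR(nn.Module):
    """Toy pipeline: enhance each exposure separately, then fuse them."""
    def __init__(self):
        super().__init__()
        self.enhance_short = EnhanceBlock()
        self.enhance_long = EnhanceBlock()
        self.fuse = FusionBlock()

    def forward(self, short_ldr, long_ldr):
        return self.fuse(self.enhance_short(short_ldr), self.enhance_long(long_ldr))


if __name__ == "__main__":
    model = TwoExposureHDR()
    short_ldr = torch.rand(1, 3, 64, 64)  # short-exposure LDR frame
    long_ldr = torch.rand(1, 3, 64, 64)   # long-exposure LDR frame
    hdr = model(short_ldr, long_ldr)
    print(hdr.shape)                      # torch.Size([1, 3, 64, 64])
```

The per-pixel blend in FusionBlock is just one plausible way to realize "selective fusion" and suppress ghosting; the abstract does not specify the actual fusion modules, so this sketch should be read only as a structural outline.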
About the journal:
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including display-human interface.
Technical papers on practical developments in Displays technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.