Staggered HDR video reconstruction with a real-world benchmark dataset for night scenes

IF 3.7 · JCR Q1 (Computer Science, Hardware & Architecture) · CAS Zone 2 (Engineering & Technology)
Huanjing Yue, Changan He, Longhan Wang, Biting Yu, Xuanwu Yin, Zhenyu Zhou, Jingyu Yang
Journal: Displays, Vol. 88, Article 103029
DOI: 10.1016/j.displa.2025.103029
Published: 2025-03-21
URL: https://www.sciencedirect.com/science/article/pii/S0141938225000666
Citations: 0

Abstract

Capturing night scenes with full visibility is attractive. Due to the limited dynamic range of camera sensors, we cannot record details in both light-source regions and dark regions. A practical solution is to use multi-exposure fusion to obtain high dynamic range (HDR) results. However, the lack of a real-world NightHDR dataset hinders the development of deep-learning-based NightHDR video reconstruction. To solve this problem, we first construct a real-world NightHDR video dataset, which contains 57 LDR-HDR video pairs captured in night scenes in both raw and sRGB formats, where the LDR frames include short and long exposures. Unlike previous alternating-exposure-based or frame-based HDR video reconstruction, we turn to staggered HDR reconstruction, which is more applicable in real scenarios. Correspondingly, we propose an efficient NightHDRNet, which contains single-exposure enhancement (stage I), two-exposure fusion (stage II), and two-stage selective fusion modules. In this way, our network can improve the dynamic range and reduce ghosting artifacts. Extensive experiments show that our approach outperforms state-of-the-art methods qualitatively and quantitatively. We will release our dataset and code after the acceptance of this work.
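To illustrate the multi-exposure fusion idea the abstract builds on, here is a minimal sketch of classical two-exposure fusion: a short and a long exposure of the same scene are merged so highlights come from the short frame and shadows from the long one. This is a generic weighted merge, not the paper's NightHDRNet; the function name, the exposure ratio, and the hat-weight thresholds are illustrative assumptions.

```python
import numpy as np

def fuse_two_exposures(short_ldr, long_ldr, ratio=8.0):
    """Merge short- and long-exposure linear frames into one HDR frame.

    short_ldr, long_ldr: float arrays in [0, 1], linear (de-gamma'd) intensities.
    ratio: exposure-time ratio between the long and the short frame.
    """
    # Bring the short exposure onto the long exposure's radiance scale.
    short_scaled = short_ldr * ratio
    # Trust the long exposure except where it nears saturation: the weight
    # ramps from 1 down to 0 as the long frame rises from 0.80 to 0.95,
    # so mid-tones come from the long frame and highlights from the short.
    w_long = np.clip((0.95 - long_ldr) / 0.15, 0.0, 1.0)
    return w_long * long_ldr + (1.0 - w_long) * short_scaled
```

For a well-exposed pixel (long frame below the saturation ramp) the long value passes through unchanged; for a clipped pixel the scaled short value takes over. The paper's contribution is learning this trade-off (and alignment across frames) rather than hand-tuning it.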
Source journal: Displays (Engineering Technology — Electronics & Electrical Engineering)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles per year: 138
Review time: 92 days
Journal introduction: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.