A dense optical flow map-guided convolutional neural network for multi-exposure image fusion in dynamic scenes

Impact factor: 5.0 · CAS Zone 2 (Physics and Astronomy) · JCR Q1 (Optics)
Minjie Wan , Bolin Chen , Yunkai Xu , Pengqiang Ge , Xiaofang Kong , Guohua Gu , Qian Chen
{"title":"A dense optical flow map-guided convolutional neural network for multi-exposure image fusion in dynamic scenes","authors":"Minjie Wan ,&nbsp;Bolin Chen ,&nbsp;Yunkai Xu ,&nbsp;Pengqiang Ge ,&nbsp;Xiaofang Kong ,&nbsp;Guohua Gu ,&nbsp;Qian Chen","doi":"10.1016/j.optlastec.2025.114086","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, multi-exposure image fusion-based high dynamic range (HDR) imaging technology has attracted widespread attention in optical instrumentation and measurement-based fields such as 3D measurement, industrial welding, and biological testing. However, existing methods are prone to producing ghosting artifacts when handling dynamic scenes, which results in significant limitations in their applicability. To address this issue, we propose a dense optical flow map-guided convolutional neural network for multi-exposure image fusion method in dynamic scenes, called DOFM-HDRNet, in this paper. This network leverages an attention module guided by dense optical flow maps between non-reference and reference images, which helps to highlight and retain complementary features of the reference image during feature extraction, while suppressing motion and overexposed regions and enhancing image details. Additionally, the fusion module employs dilated residual dense blocks (DRDBs) to expand the receptive field, enabling more accurate estimation of regions with missing details caused by motion or overexposure. Qualitative and quantitative experiments demonstrate that, compared with existing methods, the proposed approach achieves improvements of 0.40 dB in PSNR-u and 0.048 % in HDR-VDP-2 on the Kalantari dataset, and exhibits strong suppression capability of ghosting artifacts in HDR imaging for dynamic scenes.</div></div>","PeriodicalId":19511,"journal":{"name":"Optics and Laser Technology","volume":"192 ","pages":"Article 114086"},"PeriodicalIF":5.0000,"publicationDate":"2025-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Laser Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0030399225016779","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, multi-exposure image fusion-based high dynamic range (HDR) imaging technology has attracted widespread attention in optical instrumentation and measurement-based fields such as 3D measurement, industrial welding, and biological testing. However, existing methods are prone to producing ghosting artifacts when handling dynamic scenes, which significantly limits their applicability. To address this issue, in this paper we propose a dense optical flow map-guided convolutional neural network for multi-exposure image fusion in dynamic scenes, called DOFM-HDRNet. The network leverages an attention module guided by dense optical flow maps between the non-reference and reference images, which helps to highlight and retain complementary features of the reference image during feature extraction while suppressing motion-affected and overexposed regions and enhancing image details. Additionally, the fusion module employs dilated residual dense blocks (DRDBs) to expand the receptive field, enabling more accurate estimation of regions with missing details caused by motion or overexposure. Qualitative and quantitative experiments demonstrate that, compared with existing methods, the proposed approach achieves improvements of 0.40 dB in PSNR-u and 0.048 % in HDR-VDP-2 on the Kalantari dataset, and exhibits strong ghost-suppression capability in HDR imaging of dynamic scenes.
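The abstract describes two architectural ideas: an attention module that uses the dense optical flow between each non-reference exposure and the reference exposure to suppress misaligned content, and dilated residual dense blocks (DRDBs) in the fusion stage to enlarge the receptive field. The PyTorch sketch below illustrates how such components are commonly built; the class names FlowGuidedAttention and DRDB, the channel widths, and the way the flow map is concatenated with the features are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of (1) a flow-guided attention module and (2) a dilated
# residual dense block (DRDB), assuming 64-channel feature maps and a
# 2-channel dense optical flow map (dx, dy) supplied by an external
# estimator (e.g. a Farneback- or RAFT-style method). Hypothetical layout,
# not the published DOFM-HDRNet architecture.

import torch
import torch.nn as nn


class FlowGuidedAttention(nn.Module):
    """Predict per-pixel attention weights from non-reference features,
    reference features and the dense optical flow map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Two flow channels (dx, dy) are concatenated with both feature maps.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels + 2, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, feat_nonref, feat_ref, flow):
        # flow: (B, 2, H, W) dense optical flow between the two exposures.
        weights = self.attn(torch.cat([feat_nonref, feat_ref, flow], dim=1))
        # Down-weight misaligned (moving) or unreliable non-reference content.
        return feat_nonref * weights


class DRDB(nn.Module):
    """Dilated residual dense block: three dilated convolutions with dense
    connections and a local residual, enlarging the receptive field."""

    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=2, dilation=2)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=2, dilation=2)
        self.conv3 = nn.Conv2d(channels + 2 * growth, growth, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(channels + 3 * growth, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        c1 = self.act(self.conv1(x))
        c2 = self.act(self.conv2(torch.cat([x, c1], dim=1)))
        c3 = self.act(self.conv3(torch.cat([x, c1, c2], dim=1)))
        out = self.fuse(torch.cat([x, c1, c2, c3], dim=1))
        return x + out  # local residual connection


if __name__ == "__main__":
    b, c, h, w = 1, 64, 128, 128
    feat_nonref = torch.randn(b, c, h, w)
    feat_ref = torch.randn(b, c, h, w)
    flow = torch.randn(b, 2, h, w)  # dense flow map, one (dx, dy) per pixel
    attended = FlowGuidedAttention(c)(feat_nonref, feat_ref, flow)
    fused = DRDB(c)(attended)
    print(attended.shape, fused.shape)  # both (1, 64, 128, 128)
```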
Source journal metrics
CiteScore: 8.50
Self-citation rate: 10.00%
Annual publications: 1060
Average review time: 3.4 months
Aims and scope: Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication. The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
•development in all types of lasers
•developments in optoelectronic devices and photonics
•developments in new photonics and optical concepts
•developments in conventional optics, optical instruments and components
•techniques of optical metrology, including interferometry and optical fibre sensors
•LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
•applications of lasers to materials processing, optical NDT display (including holography) and optical communication
•research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
•developments in optical computing and optical information processing
•developments in new optical materials
•developments in new optical characterization methods and techniques
•developments in quantum optics
•developments in light assisted micro and nanofabrication methods and techniques
•developments in nanophotonics and biophotonics
•developments in imaging processing and systems