Efficient Multi-exposure Image Fusion via Filter-dominated Fusion and Gradient-driven Unsupervised Learning

Kaiwen Zheng, Jie Huang, Huikang Yu, Fengmei Zhao
{"title":"Efficient Multi-exposure Image Fusion via Filter-dominated Fusion and Gradient-driven Unsupervised Learning","authors":"Kaiwen Zheng, Jie Huang, Huikang Yu, Fengmei Zhao","doi":"10.1109/CVPRW59228.2023.00281","DOIUrl":null,"url":null,"abstract":"Multi exposure image fusion (MEF) aims to produce images with a high dynamic range of visual perception by integrating complementary information from different exposure levels, bypassing common sensors’ physical limits. Despite the marvelous progress made by deep learning-based methods, few considerations have been given to the innovation of fusion paradigms, leading to insufficient model capacity utilization. This paper proposes a novel filter prediction-dominated fusion paradigm toward a simple yet effective MEF. Precisely, we predict a series of spatial-adaptive filters conditioned on the hierarchically represented features to perform an image-level dynamic fusion. The proposed paradigm has the following merits over the previous: 1) it circumvents the risk of information loss arising from the implicit encoding and decoding processes within the neural network, and 2) it better integrates local information to obtain better continuous spatial representations than the weight map-based paradigm. Furthermore, we propose a Gradient-driven Image Fidelity (GIF) loss for unsupervised MEF. Empowered by the exploitation of informative property in the gradient domain, GIF is able to implement a stable distortion-free optimization process. Experimental results demonstrate that our method achieves the best visual performance compared to the state-of-the-art while achieving an almost 30% improvement in inference time. The code is available at https://github.com/keviner1/FFMEF.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW59228.2023.00281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Multi-exposure image fusion (MEF) aims to produce images with a high dynamic range of visual perception by integrating complementary information from inputs at different exposure levels, bypassing the physical limits of common sensors. Despite the remarkable progress of deep learning-based methods, little attention has been paid to innovating the fusion paradigm itself, leading to underutilization of model capacity. This paper proposes a novel filter-prediction-dominated fusion paradigm for simple yet effective MEF. Specifically, we predict a series of spatially adaptive filters, conditioned on hierarchically represented features, to perform image-level dynamic fusion. The proposed paradigm has two merits over previous ones: 1) it circumvents the risk of information loss arising from the implicit encoding and decoding processes within the neural network, and 2) it integrates local information better than the weight-map-based paradigm, yielding more continuous spatial representations. Furthermore, we propose a Gradient-driven Image Fidelity (GIF) loss for unsupervised MEF. By exploiting the informative properties of the gradient domain, GIF enables a stable, distortion-free optimization process. Experimental results demonstrate that our method achieves the best visual performance compared to the state of the art while improving inference time by almost 30%. The code is available at https://github.com/keviner1/FFMEF.
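To make the filter-prediction paradigm concrete, below is a minimal PyTorch sketch of the idea: a small network predicts per-pixel k×k filter coefficients for each exposure, and the fused result is a convex combination of local patches taken directly from the input images. The encoder, filter size, and module names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of filter-prediction-dominated fusion. The encoder, filter
# size k, and head design are assumptions; only the overall idea (predict
# per-pixel filters, apply them at the image level) follows the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_filters(img: torch.Tensor, filters: torch.Tensor, k: int) -> torch.Tensor:
    """Apply per-pixel k x k filters: img (B, C, H, W), filters (B, k*k, H, W)."""
    b, c, h, w = img.shape
    patches = F.unfold(img, kernel_size=k, padding=k // 2)  # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h, w)
    return (patches * filters.unsqueeze(1)).sum(dim=2)      # (B, C, H, W)


class FilterFusion(nn.Module):
    def __init__(self, k: int = 3, feat: int = 32):
        super().__init__()
        self.k = k
        # Stand-in for the paper's hierarchical feature extractor.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Predicts k*k coefficients per pixel for each of the two exposures.
        self.head = nn.Conv2d(feat, 2 * k * k, 3, padding=1)

    def forward(self, under: torch.Tensor, over: torch.Tensor) -> torch.Tensor:
        coeffs = self.head(self.encoder(torch.cat([under, over], dim=1)))
        coeffs = F.softmax(coeffs, dim=1)          # convex weights over both filters
        f_under, f_over = coeffs.chunk(2, dim=1)   # (B, k*k, H, W) each
        # Fusion operates on the images themselves, so image content never
        # passes through an implicit encode/decode bottleneck.
        return apply_filters(under, f_under, self.k) + apply_filters(over, f_over, self.k)
```

Usage: `fused = FilterFusion()(under, over)` with `under` and `over` as (B, 3, H, W) tensors in [0, 1]. The per-pixel softmax makes each output pixel a convex combination of local input patches, which is what distinguishes this from predicting a single scalar weight map per exposure.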
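The GIF loss is described only at a high level in the abstract. The sketch below is one plausible gradient-domain fidelity objective under the common MEF assumption that, at each pixel, the source exposure with the stronger gradient carries the more informative detail; the Sobel operator and the max-magnitude target are assumptions here, not the paper's exact formulation.

```python
# Hedged sketch of a gradient-domain image-fidelity loss in the spirit of GIF.
import torch
import torch.nn.functional as F


def sobel_grad(x: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradients; returns (B, 2*C, H, W) holding dx and dy."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device, dtype=x.dtype)
    weight = torch.stack([kx, kx.t()]).unsqueeze(1)  # (2, 1, 3, 3)
    weight = weight.repeat(x.shape[1], 1, 1, 1)      # (2*C, 1, 3, 3)
    return F.conv2d(x, weight, padding=1, groups=x.shape[1])


def gif_loss(fused: torch.Tensor, under: torch.Tensor, over: torch.Tensor) -> torch.Tensor:
    g_f, g_u, g_o = sobel_grad(fused), sobel_grad(under), sobel_grad(over)
    # At each position, keep the source gradient with the larger magnitude,
    # i.e. the exposure that preserves more local structure there.
    target = torch.where(g_u.abs() >= g_o.abs(), g_u, g_o)
    return F.l1_loss(g_f, target.detach())
```

Supervising in the gradient domain rather than on raw intensities avoids penalizing the fused image for having a brightness that matches neither input, which is one way such a loss can keep optimization stable and distortion-free.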