Detail Enhanced Multi-Exposure Image Fusion Based On Edge Preserving Filters

Q4 Computer Science
Harbinder Singh
{"title":"Detail Enhanced Multi-Exposure Image Fusion Based On Edge Preserving Filters","authors":"Harbinder Singh","doi":"10.5565/REV/ELCVIA.1126","DOIUrl":null,"url":null,"abstract":"Recent computational photography techniques play a significant role to overcome the limitation of standard digital cameras for handling wide dynamic range of real-world scenes contain brightly and poorly illuminated areas. In many of such techniques [1,2,3], it is often desirable to fuse details from images captured at different exposure settings, while avoiding visual artifacts. One such technique is High Dynamic Range (HDR) imaging that provides a solution to recover radiance maps from photographs taken with conventional imaging equipment. The process of HDR image composition needs the knowledge of exposure times and Camera Response Function (CRF), which is required to linearize the image data before combining Low Dynamic Range (LDR) exposures into HDR image. One of the long-standing challenges in HDR imaging technology is the limited Dynamic Range (DR) of conventional display devices and printing technology. Due to which these devices are unable to reproduce full DR. Although DR can be reduced by using a tone-mapping, but this comes at an unavoidable trade-off with increased computational cost. Therefore, it is desirable to maximize information content of the synthesized scene from a set of multi-exposure images without computing HDR radiance map and tone-mapping.This research attempts to develop a novel detail enhanced multi-exposure image fusion approach based on texture features, which exploits the edge preserving and intra-region smoothing property of nonlinear diffusion filters based on Partial Differential Equations (PDE). With the captured multi-exposure image series, we first decompose images into Base Layers (BLs) and Detail Layers (DLs) to extract sharp details and fine details, respectively. The magnitude of the gradient of the image intensity is utilized to encourage smoothness at homogeneous regions in preference to inhomogeneous regions. In the next step texture features of the BL to generate a decision mask (i.e., local range) have been considered that guide the fusion of BLs in multi-resolution fashion. Finally, well-exposed fused image is obtained that combines fused BL and the DL at each scale across all the input exposures. The combination of edge-preserving filters with Laplacian pyramid is shown to lead to texture detail enhancement in the fused image.Furthermore, Non-linear adaptive filter is employed for BL and DL decomposition that has better response near strong edges. The texture details are then added to the fused BL to reconstruct a detail enhanced LDR version of the image. This leads to an increased robustness of the texture details while at the same time avoiding gradient reversal artifacts near strong edges that may appear in fused image after DL enhancement.Finally, we propose a novel technique for exposure fusion in which Weighted Least Squares (WLS) optimization framework is utilized for weight map refinement of BLs and DLs, which lead to a new simple weighted average fusion framework. Computationally simple texture features (i.e. DL) and color saturation measure are preferred for quickly generating weight maps to control the contribution from an input set of multi-exposure images. Instead of employing intermediate HDR reconstruction and tone-mapping steps, well-exposed fused image is generated for displaying on conventional display devices. 
Simulation results are compared with a number of existing single resolution and multi-resolution techniques to show the benefits of the proposed scheme for the variety of cases. Moreover, the approaches proposed in this thesis are effective for blending flash and no-flash image pair, and multi-focus images, that is, input images photographed with and without flash, and images focused on different targets, respectively. A further advantage of the present technique is that it is well suited for detail enhancement in the fused image.","PeriodicalId":38711,"journal":{"name":"Electronic Letters on Computer Vision and Image Analysis","volume":"86 1","pages":"13-16"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Letters on Computer Vision and Image Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5565/REV/ELCVIA.1126","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Computer Science","Score":null,"Total":0}
引用次数: 6

Abstract

Recent computational photography techniques play a significant role in overcoming the limitations of standard digital cameras in handling the wide dynamic range of real-world scenes that contain both brightly and poorly illuminated areas. In many such techniques [1,2,3], it is desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. One such technique is High Dynamic Range (HDR) imaging, which recovers radiance maps from photographs taken with conventional imaging equipment. HDR image composition requires knowledge of the exposure times and the Camera Response Function (CRF), which is needed to linearize the image data before combining the Low Dynamic Range (LDR) exposures into an HDR image. A long-standing challenge in HDR imaging is the limited Dynamic Range (DR) of conventional display devices and printing technology, which cannot reproduce the full DR. Although the DR can be compressed by tone mapping, this comes at the unavoidable cost of increased computation. Therefore, it is desirable to maximize the information content of a scene synthesized from a set of multi-exposure images without computing an HDR radiance map or applying tone mapping.

This research develops a novel detail-enhanced multi-exposure image fusion approach based on texture features, which exploits the edge-preserving and intra-region smoothing properties of nonlinear diffusion filters based on Partial Differential Equations (PDEs). Given a captured multi-exposure image series, we first decompose each image into a Base Layer (BL) and a Detail Layer (DL) to extract sharp details and fine details, respectively. The magnitude of the image intensity gradient is used to encourage smoothing in homogeneous regions in preference to inhomogeneous regions. Next, texture features of the BL (i.e., local range) are used to generate a decision mask that guides the fusion of the BLs in a multi-resolution fashion. Finally, a well-exposed fused image is obtained by combining the fused BL with the DLs at each scale across all the input exposures. The combination of edge-preserving filters with the Laplacian pyramid is shown to enhance texture detail in the fused image.

Furthermore, a non-linear adaptive filter, which has a better response near strong edges, is employed for BL and DL decomposition. The texture details are then added to the fused BL to reconstruct a detail-enhanced LDR version of the image. This increases the robustness of the texture details while avoiding the gradient reversal artifacts near strong edges that may appear in the fused image after DL enhancement.

Finally, we propose a novel exposure fusion technique in which a Weighted Least Squares (WLS) optimization framework is used to refine the weight maps of the BLs and DLs, leading to a new, simple weighted-average fusion framework. Computationally simple texture features (i.e., the DL) and a color saturation measure are preferred for quickly generating weight maps that control the contribution of each image in the input multi-exposure set. Instead of employing intermediate HDR reconstruction and tone-mapping steps, a well-exposed fused image is generated directly for display on conventional devices. Simulation results are compared with a number of existing single-resolution and multi-resolution techniques to show the benefits of the proposed scheme in a variety of cases. Moreover, the approaches proposed in this thesis are effective for blending flash/no-flash image pairs and multi-focus images, that is, input images photographed with and without flash, and images focused on different targets, respectively. A further advantage of the present technique is that it is well suited for detail enhancement in the fused image.
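The edge-preserving decomposition step described above can be illustrated with a short sketch. The Python code below assumes a Perona-Malik-style nonlinear diffusion filter as the PDE-based smoother; the function names, iteration count, step size `gamma`, and edge-stopping constant `kappa` are illustrative choices rather than the thesis settings, and the input is assumed to be a single-channel image scaled to [0, 1].

```python
# Minimal sketch of the base/detail decomposition, assuming a Perona-Malik-style
# nonlinear diffusion filter; parameters below are illustrative only.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.15):
    """Edge-preserving smoothing: diffuse strongly in homogeneous regions,
    weakly across strong gradients (large |grad I|)."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Differences to the four neighbours (wrap-around border for brevity).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Edge-stopping function g(|grad|) = exp(-(|grad|/kappa)^2):
        # close to 1 in flat regions, close to 0 near edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def decompose(img):
    """Split one exposure into a Base Layer (BL) and a Detail Layer (DL)."""
    base = anisotropic_diffusion(img)
    detail = img - base          # residual carries the fine texture
    return base, detail
```

Because the edge-stopping function approaches zero at large gradients, strong edges remain in the base layer while fine texture ends up in the detail layer, which is the property the fusion stage relies on.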
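The weighted-average fusion stage can be sketched in the same spirit. In the code below the weight maps are built from detail magnitude and a crude colour-saturation cue (per-pixel channel standard deviation), the WLS refinement of the weight maps is replaced by a simple Gaussian blur purely for illustration, and fusion is performed on the luminance channel only; `fuse_exposures`, `boost`, and `sigma` are hypothetical names and values, not the thesis implementation.

```python
# Minimal sketch of the weight-map-driven fusion stage, assuming RGB exposures
# as HxWx3 float arrays in [0, 1].  A real implementation would refine the
# weight maps with WLS optimization; a Gaussian blur stands in for it here.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_exposures(images, decompose_fn, boost=1.2, sigma=3.0, eps=1e-12):
    """decompose_fn: callable mapping a 2-D image to (base_layer, detail_layer),
    e.g. the anisotropic-diffusion decomposition sketched above."""
    bases, details, weights = [], [], []
    for img in images:
        gray = img.mean(axis=2)                  # luminance-only for brevity
        base, detail = decompose_fn(gray)
        saturation = img.std(axis=2)             # crude colour-saturation cue
        w = np.abs(detail) + saturation          # favour detailed, colourful pixels
        w = gaussian_filter(w, sigma=sigma)      # stand-in for WLS refinement
        bases.append(base)
        details.append(detail)
        weights.append(w)

    weights = np.stack(weights)
    weights /= weights.sum(axis=0) + eps         # normalise across exposures

    fused_base = sum(w * b for w, b in zip(weights, bases))
    fused_detail = sum(w * d for w, d in zip(weights, details))
    # Adding back slightly boosted details gives a detail-enhanced result.
    return np.clip(fused_base + boost * fused_detail, 0.0, 1.0)
```

Under these assumptions a call would look like `fused = fuse_exposures([under, mid, over], decompose)`, where `under`, `mid`, and `over` are the multi-exposure inputs and `decompose` is the function from the previous sketch.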
Source journal
Electronic Letters on Computer Vision and Image Analysis (Computer Science: Computer Vision and Pattern Recognition)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles per year: 19
Review time: 12 weeks