{"title":"A multi-exposure image fusion using adaptive color dissimilarity and dynamic equalization techniques","authors":"Jishnu C.R., Vishnukumar S.","doi":"10.1016/j.jvcir.2024.104350","DOIUrl":null,"url":null,"abstract":"<div><div>In the domain of image processing, Multi-Exposure Image Fusion (MEF) emerges as a crucial technique for developing high dynamic range (HDR) representations from fusing sequences of low dynamic range images. Conventional fusion methods often suffer from shortcomings such as detail loss, edge artifacts, and color inconsistencies, thereby compromising the quality of the fused output which is further diminished with extremely exposed and limited inputs. While there have been a few efforts to conduct fusion on limited and impaired static input images, there has been no exploration into the fusion of dynamic image sets. This paper proposes an effective MEF approach that operates on a minimum of two extremely exposed, limited datasets of both static and dynamic scenes. The approach initiates with categorizing input images into under-exposed and over-exposed categories based on lighting levels, subsequently applying tailored exposure correction strategies. Through iterative refinement and selection of optimally exposed variant, we construct an advanced intermediate stack, upon which fusion is performed by a pyramidal fusion technique. The method relies on adaptive well-exposedness and color gradient to develop weight maps for pyramidal fusion. The initial weights are refined using a Gaussian filter and this results in the creation of a seamlessly fused image with expanded dynamic range. Additionally, for dynamic imagery, we propose an adaptive color dissimilarity and dynamic equalization to reduce ghosting artifacts. Comparative assessments against existing methodologies, both visually and empirically confirms the superior performance of the proposed model.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"107 ","pages":"Article 104350"},"PeriodicalIF":2.6000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Communication and Image Representation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1047320324003067","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
In the domain of image processing, Multi-Exposure Image Fusion (MEF) is a crucial technique for constructing high dynamic range (HDR) representations by fusing sequences of low dynamic range images. Conventional fusion methods often suffer from shortcomings such as detail loss, edge artifacts, and color inconsistencies, compromising the quality of the fused output; this degradation is exacerbated when the inputs are extremely exposed and few in number. While there have been a few efforts to fuse limited and impaired static input images, the fusion of dynamic image sets has not been explored. This paper proposes an effective MEF approach that operates on limited sets of as few as two extremely exposed images of both static and dynamic scenes. The approach begins by categorizing the input images into under-exposed and over-exposed classes based on their lighting levels, and then applies tailored exposure-correction strategies to each class. Through iterative refinement and selection of the optimally exposed variants, we construct an intermediate stack, which is then fused with a pyramidal fusion technique. The method relies on adaptive well-exposedness and color gradient measures to build the weight maps for pyramidal fusion. The initial weights are refined with a Gaussian filter, yielding a seamlessly fused image with an expanded dynamic range. Additionally, for dynamic imagery, we propose adaptive color dissimilarity and dynamic equalization measures to reduce ghosting artifacts. Comparative assessments against existing methods, both visual and quantitative, confirm the superior performance of the proposed model.
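The abstract describes a classic exposure-fusion pipeline: per-pixel weight maps built from well-exposedness and a color-gradient cue, Gaussian smoothing of the weights, and Laplacian-pyramid blending. The sketch below is not the authors' implementation; it is a minimal, generic Mertens-style exposure-fusion example using OpenCV and NumPy, written to illustrate that general idea. The weight cues, smoothing sigma, pyramid depth, function names, and file names are illustrative assumptions, and the paper's exposure-correction stage and its adaptive color-dissimilarity / dynamic-equalization deghosting for dynamic scenes are not shown.

```python
# Minimal sketch (not the authors' code): generic exposure fusion with
# well-exposedness x color-gradient weight maps, Gaussian-smoothed, then
# blended via Laplacian pyramids. Parameters are illustrative assumptions.
import cv2
import numpy as np

def weight_maps(images, sigma_we=0.2, blur_sigma=2.0):
    """Per-image weights from well-exposedness and color gradient, smoothed and normalized."""
    weights = []
    for img in images:  # img: float32 in [0, 1], shape (H, W, 3)
        # Well-exposedness: penalize pixels far from mid-grey in every channel.
        we = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma_we ** 2)), axis=2)
        # Color-gradient cue: magnitude of per-channel Sobel gradients.
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        grad = np.sqrt(gx ** 2 + gy ** 2).sum(axis=2)
        w = we * (grad + 1e-6)
        # Refine the raw weights with a Gaussian filter, as the abstract suggests.
        weights.append(cv2.GaussianBlur(w, (0, 0), blur_sigma))
    w = np.stack(weights, axis=0)
    return w / (w.sum(axis=0, keepdims=True) + 1e-12)

def fuse_pyramid(images, weights, levels=5):
    """Blend Laplacian pyramids of the inputs with Gaussian pyramids of the weights."""
    fused = None
    for img, w in zip(images, weights):
        # Gaussian pyramids of the weight map and the image.
        gp_w, gp_i = [w.astype(np.float32)], [img.astype(np.float32)]
        for _ in range(levels):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        # Laplacian pyramid of the image (finest level first).
        lp_i = [gp_i[-1]]
        for k in range(levels, 0, -1):
            up = cv2.pyrUp(gp_i[k], dstsize=gp_i[k - 1].shape[1::-1])
            lp_i.insert(0, gp_i[k - 1] - up)
        # Accumulate weighted contributions level by level.
        contrib = [lp * gw[..., None] for lp, gw in zip(lp_i, gp_w)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    # Collapse the fused pyramid from coarse to fine.
    out = fused[-1]
    for k in range(levels, 0, -1):
        out = cv2.pyrUp(out, dstsize=fused[k - 1].shape[1::-1]) + fused[k - 1]
    return np.clip(out, 0.0, 1.0)

# Hypothetical usage with two extremely exposed inputs (file names assumed):
# imgs = [cv2.imread(p).astype(np.float32) / 255.0 for p in ("under.jpg", "over.jpg")]
# fused = fuse_pyramid(imgs, weight_maps(imgs))
# cv2.imwrite("fused.jpg", (fused * 255).astype(np.uint8))
```

Blending in the pyramid domain, rather than averaging pixels directly, is what lets sharp weight transitions produce seamless results; the Gaussian refinement of the weights further suppresses halos at exposure boundaries.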
About the journal:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.