{"title":"Effective multi-scale enhancement fusion method for low-light images based on interest-area perception OCTM and “pixel healthiness” evaluation","authors":"Yi-lun Wang, Yi-zheng Lang, Yun-sheng Qian","doi":"10.1007/s00371-024-03554-5","DOIUrl":null,"url":null,"abstract":"<p>Low-light images suffer from low contrast and low dynamic range. However, most existing single-frame low-light image enhancement algorithms are not good enough in terms of detail preservation and color expression and often have high algorithmic complexity. In this paper, we propose a single-frame low-light image fusion enhancement algorithm based on multi-scale contrast–tone mapping and \"pixel healthiness\" evaluation. It can adaptively adjust the exposure level of each region according to the principal component in the image and enhance contrast while preserving color and detail expression with low computational complexity. In particular, to find the most appropriate size of the artificial image sequence and the target enhancement range for each image, we propose a multi-scale parameter determination method based on the principal component analysis of the V-channel histogram to obtain the best enhancement while reducing unnecessary computations. In addition, a new \"pixel healthiness\" evaluation method based on global illuminance and local contrast is proposed for fast and efficient computation of weights for image fusion. Subjective evaluation and objective metrics show that our algorithm performs better than existing single-frame image algorithms and other fusion-based algorithms in enhancement, contrast, color expression, and detail preservation.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"44 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03554-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Low-light images suffer from low contrast and a narrow dynamic range. However, most existing single-frame low-light image enhancement algorithms fall short in detail preservation and color expression, and they often carry high algorithmic complexity. In this paper, we propose a single-frame low-light image fusion enhancement algorithm based on multi-scale contrast–tone mapping and "pixel healthiness" evaluation. It adaptively adjusts the exposure level of each region according to the principal components of the image, enhancing contrast while preserving color and detail at low computational cost. In particular, to determine the most appropriate size of the artificial image sequence and the target enhancement range for each image, we propose a multi-scale parameter determination method based on principal component analysis of the V-channel histogram, which obtains the best enhancement while avoiding unnecessary computation. In addition, a new "pixel healthiness" evaluation method based on global illuminance and local contrast is proposed for fast and efficient computation of the weights used in image fusion. Subjective evaluation and objective metrics show that our algorithm outperforms existing single-frame algorithms and other fusion-based algorithms in enhancement, contrast, color expression, and detail preservation. A minimal illustrative sketch of the fusion stage described here is given below.
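The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the general idea of fusing an artificial exposure sequence with per-pixel weights derived from global illuminance and local contrast. The gamma-curve sequence stands in for the paper's OCTM-based enhancement, and all function names, parameters (`sigma`, `ksize`, `gammas`), and file paths are hypothetical.

```python
# Hypothetical sketch of weight-based fusion on the V channel, assuming OpenCV and NumPy.
# The "healthiness" weight combines a global-illuminance term (closeness to mid-gray)
# with a local-contrast term (Laplacian magnitude), as loosely suggested by the abstract.
import cv2
import numpy as np

def healthiness_weight(v, sigma=0.2, ksize=7):
    """Weight a pixel higher when its illuminance is near mid-gray and its local contrast is high."""
    global_term = np.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))      # global illuminance term
    local_term = np.abs(cv2.Laplacian(v, cv2.CV_32F, ksize=ksize))  # local contrast term
    return global_term * (local_term + 1e-3)

def fuse_sequence(bgr, gammas=(0.4, 0.7, 1.0)):
    """Build an artificial exposure sequence via gamma curves and fuse it with normalized weights."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0
    frames, weights = [], []
    for g in gammas:
        v_g = np.clip(v ** g, 0.0, 1.0)      # one synthetic "exposure" of the V channel
        frames.append(v_g)
        weights.append(healthiness_weight(v_g))
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True) + 1e-8  # normalize weights across the sequence
    fused_v = (np.stack(frames) * w).sum(axis=0)
    hsv[..., 2] = np.clip(fused_v * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

if __name__ == "__main__":
    img = cv2.imread("low_light.png")  # hypothetical input path
    if img is not None:
        cv2.imwrite("enhanced.png", fuse_sequence(img))
```

In the paper, the number of frames and their enhancement ranges are chosen adaptively from PCA of the V-channel histogram rather than fixed as in this sketch; the fixed `gammas` tuple is used here only to keep the example self-contained.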