Ke Gu, Guangtao Zhai, Min Liu, Qi Xu, Xiaokang Yang, Jun Zhou, Wenjun Zhang
Adaptive high-frequency clipping for improved image quality assessment
2013 Visual Communications and Image Processing (VCIP), November 2013. DOI: 10.1109/VCIP.2013.6706347
Citations: 11
Abstract
It is widely known that the human visual system (HVS) applies multi-resolution analysis to the scenes we see. In fact, many of the best image quality metrics, e.g., MS-SSIM and IW-PSNR/SSIM, are based on multi-scale models. However, in existing multi-scale image quality assessment (IQA) methods, the resolution levels are fixed. In this paper, we examine the problem of selecting optimal levels in the multi-resolution analysis to preprocess an image for perceptual quality assessment. According to the contrast sensitivity function (CSF) of the HVS, the sampling of visual information by the human eye approximates a low-pass process. For images, the amount of information we can extract depends on the size of the image (or of the objects inside it) as well as on the viewing distance. We therefore propose a wavelet-transform-based adaptive high-frequency clipping (AHC) model to approximate the effective visual information that enters the HVS. After the high-frequency clipping, rather than processing each level separately, we transform the filtered images back to their original resolutions for quality assessment. Extensive experimental results show that, on various databases (LIVE, IVC, and Toyama-MICT), the performance of existing image quality algorithms (PSNR and SSIM) can be substantially improved by applying the metrics to images processed by the AHC model.
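The pipeline the abstract describes — decompose with a wavelet transform, discard (clip) the detail subbands at the finest scales, and reconstruct back to the original resolution before running PSNR/SSIM — can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the paper's AHC model chooses how many levels to clip adaptively from the CSF, image size, and viewing distance, whereas here `clip_levels` is simply a fixed parameter, and a plain Haar wavelet is assumed.

```python
import numpy as np

def haar2d(img):
    """One level of an orthonormal 2D Haar decomposition.
    Returns the approximation LL and the detail subbands LH, HL, HH."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 2.0
    LH = (a - b + c - d) / 2.0
    HL = (a + b - c - d) / 2.0
    HH = (a - b - c + d) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d: rebuilds the image at twice the resolution."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    img[0::2, 1::2] = (LL - LH + HL - HH) / 2.0
    img[1::2, 0::2] = (LL + LH - HL - HH) / 2.0
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return img

def ahc(img, clip_levels=1, total_levels=3):
    """Sketch of high-frequency clipping: decompose `total_levels` times,
    zero the detail subbands at the `clip_levels` finest scales, and
    reconstruct back to the original resolution.  In the paper the number
    of clipped levels is chosen adaptively; here it is a fixed argument."""
    ll = img.astype(float)
    details = []  # details[0] holds the finest-scale subbands
    for _ in range(total_levels):
        ll, lh, hl, hh = haar2d(ll)
        details.append((lh, hl, hh))
    for i in range(total_levels):
        lh, hl, hh = details.pop()  # pop coarsest first, finest last
        if i >= total_levels - clip_levels:
            lh = hl = hh = np.zeros_like(lh)  # clip high frequencies
        ll = ihaar2d(ll, lh, hl, hh)
    return ll
```

A metric such as PSNR or SSIM would then be computed between `ahc(reference)` and `ahc(distorted)` rather than between the raw images; with `clip_levels=0` the transform round-trips exactly, so the unmodified metric is recovered as a special case.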