Pixel-Wise Quantization for Image Compression
Liang Wei, Fangdong Chen, L. xilinx Wang, Xiaoyang Wu, Shiliang Pu
2023 Data Compression Conference (DCC), March 2023. DOI: 10.1109/DCC55655.2023.00058
Abstract
This paper proposes a pixel-wise quantization (PWQ) method that adaptively reduces the quantization parameters (QPs) of simple pixels to enhance subjective quality, since distortions on simple pixels are more noticeable than those on complex pixels. For the pixel-wise prediction in Fig. 1, pixel-wise reconstruction is implemented and the transform is disabled, where the symbol “=” (or “∨”/“>”) means the current prediction is the average of the left and right reconstructions (or the upper/left reconstruction). The PWQ method is applied in the same prediction direction and reconstruction order, adjusting the current pixel QP ($Q_{pixel}$) adaptively by (1), where $Q_{cb}$ denotes the current block QP, $T_{pred}$ denotes the predicted texture complexity based on the neighboring reconstructed pixels, and the parameters $\delta$, $Q_{jnd}$, $Q_{thres}$, and $T_{thres}$ are preset on both the encoder and decoder sides, so no additional syntax needs to be transmitted in the bitstream. Moreover, for transform-off non-pixel-wise prediction, a straightforward extension of the PWQ method divides the coding block into simple and complex areas based on the above reference pixels and reduces the pixel QP in the simple areas. Qualitative results in Fig. 1 show that the PWQ method can significantly improve subjective quality by reducing distortions on simple pixels, especially in flat areas near object edges and between words in screen content, and realizes finer-grained pixel-level quantization compared with traditional block-level quantization.
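The abstract names the inputs to equation (1) ($Q_{cb}$, $T_{pred}$, $\delta$, $Q_{jnd}$, $Q_{thres}$, $T_{thres}$) but does not reproduce the equation itself, so the following Python sketch is a hypothetical reading of the rule: measure the texture complexity of the neighboring reconstructed pixels, and if the pixel looks "simple" (low $T_{pred}$) while the block QP is high, lower the pixel QP toward a just-noticeable-distortion floor. The complexity measure, the branch condition, and every default value are illustrative assumptions, not the paper's definitions.

```python
def predicted_texture_complexity(neighbors):
    """T_pred: spread of the neighboring reconstructed pixel values.

    This max-minus-min measure is an assumed stand-in for the paper's
    (unspecified) texture-complexity predictor.
    """
    return max(neighbors) - min(neighbors)


def pixel_qp(q_cb, neighbors, delta=4, q_jnd=16, q_thres=22, t_thres=8):
    """Hypothetical per-pixel QP adaptation in the spirit of PWQ's eq. (1).

    q_cb: current block QP. delta, q_jnd, q_thres, t_thres are preset on
    both encoder and decoder sides (so nothing extra is signalled in the
    bitstream); the default values here are purely illustrative.
    """
    t_pred = predicted_texture_complexity(neighbors)
    if t_pred < t_thres and q_cb > q_thres:
        # Simple pixel under a coarse block QP: reduce the QP, but keep it
        # at or above the JND floor so the reduction stays bounded.
        return max(q_cb - delta, q_jnd)
    return q_cb  # complex pixel (or already-fine QP): keep the block QP
```

Because both sides evaluate the same rule on already-reconstructed neighbors, the decoder can recompute $Q_{pixel}$ without any additional syntax, which is the property the abstract emphasizes.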