Title: Hierarchical Frequency-Based Upsampling and Refining for HEVC Compressed Video Enhancement
Authors: Qianyu Zhang; Bolun Zheng; Xingying Chen; Quan Chen; Zunjie Zhu; Canjin Wang; Zongpeng Li; Xu Jia; Chengang Yan
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 5, pp. 4423-4436
DOI: 10.1109/TCSVT.2024.3517840
Publication date: 2024-12-16
URL: https://ieeexplore.ieee.org/document/10802929/
Citations: 0
Abstract
Video compression artifacts arise from quantization applied in the frequency domain. Video quality enhancement aims to reduce such compression artifacts and reconstruct a visually pleasing result. While existing methods effectively reduce artifacts in the spatial domain, they often overlook the rich frequency-domain information, especially when addressing multi-scale compression artifacts. This work introduces a frequency-domain upsampling strategy within a multi-scale framework, specifically designed to focus on high-frequency details rather than simply blending neighboring pixels during upsampling. Our proposed hierarchical frequency-based upsampling and refinement neural network (HFUR) consists of two modules: implicit frequency upsampling (ImpFreqUp) and hierarchical and iterative refinement (HIR). ImpFreqUp exploits the DCT-domain prior derived through an implicit DCT transform and accurately reconstructs the DCT-domain signal via a coarse-to-fine transfer. Additionally, HIR is introduced to facilitate cross-collaboration and information compensation between scales, further refining the feature maps and improving the visual quality of the final output. We demonstrate the effectiveness of the proposed modules via ablation experiments and visualized results. Experimental results show that HFUR outperforms state-of-the-art methods by up to 0.13 dB/0.17 dB under constant bit rate and constant QP modes, respectively. The code is available at https://github.com/zqqqyu/HFUR.
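To make the core idea of frequency-domain upsampling concrete, the sketch below upsamples an image by zero-padding its DCT coefficients rather than blending neighboring pixels. This is a minimal classical illustration, not the paper's ImpFreqUp module: the function name dct_upsample and the use of SciPy's dctn/idctn are assumptions for illustration only; the authors' implementation at the GitHub link above is the authoritative reference.

    # Illustrative sketch only: classical DCT zero-padding interpolation as a rough
    # analogue of frequency-domain upsampling. NOT the paper's ImpFreqUp module.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_upsample(img: np.ndarray, scale: int = 2) -> np.ndarray:
        """Upsample a 2-D grayscale image by zero-padding its DCT coefficients."""
        h, w = img.shape
        coeffs = dctn(img, type=2, norm="ortho")            # forward 2-D DCT
        padded = np.zeros((h * scale, w * scale), dtype=coeffs.dtype)
        padded[:h, :w] = coeffs                             # keep original coefficients, zero-pad the high frequencies
        padded *= scale                                     # energy compensation for the orthonormal 2-D DCT
        return idctn(padded, type=2, norm="ortho")          # back to the spatial domain

    if __name__ == "__main__":
        x = np.random.rand(64, 64).astype(np.float32)
        y = dct_upsample(x, scale=2)
        print(y.shape)  # (128, 128)

Unlike bilinear or bicubic interpolation, this kind of upsampling operates entirely on frequency coefficients, which is the property the abstract emphasizes; HFUR additionally learns the transform and refines the result across scales.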
About the Journal
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.