{"title":"DQT-CALF: Content adaptive neural network based In-Loop filter in VVC using dual query transformer","authors":"Yunfeng Liu, Cheolkon Jung","doi":"10.1016/j.neucom.2025.130064","DOIUrl":null,"url":null,"abstract":"<div><div>As auxiliary inputs for the neural network-based in-loop filter (NNLF) in the versatile video coding (VVC), the prediction frame, partition map and quantization parameter (QP) map are effectively used for enhancing the reconstruction frame, i.e., main input. The prediction frame and partition map play a crucial role in reducing compression artifacts, while the QP map distinguishes between inputs with different QPs. However, direct concatenation of the auxiliary inputs with the reconstruction frame may not fully leverage their advantages potentially causing conflicting effects. In this paper, we propose a content adaptive NNLF in VVC using dual query transformer (DQT), named DQT-CALF. We adopt DQT based on a dual query mechanism to effectively fuse low-frequency global features and high-frequency local features, thereby learning rich feature representations. The dual query mechanism enhances comprehensiveness and representation ability of the model, reduces the risk of information loss, and is suitable for fusion of data multiple types. DQT-CALF mainly includes three parts of feature extraction (head), feature enhancement (body), and reconstruction (tail). In feature extraction, we use a parallel network structure for the main and auxiliary inputs and assign feature maps to different weights according to the richness of the input information. For the auxiliary inputs, we do not directly concatenate them with the reconstruction frame, but generate a spatial attention map based on them to enhance the reconstruction frame. 
In feature enhancement, we design a multi-type feature fusion module that divides the input tensor into the low-frequency and high-frequency features from the frequency viewpoint and the local and global features from the spatial viewpoint. The two groups of features are processed separately and are mutually transformed by the high-frequency local feature generation and low-frequency global feature generation, respectively. In reconstruction, we reconstruct the frame using a 3x3 convolution layer and a pixel-shuffle layer with a long skip connection. Experimental results demonstrate that DQT-CALF achieves average BD rate gains of {8.79% (Y), 22.09% (U), 22.99% (V)} and {9.68% (Y), 22.27% (U), 22.52% (V)} over the VTM-11.0_NNVC-3.0 anchor under all intra (AI) and random access (RA) configurations, respectively.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"637 ","pages":"Article 130064"},"PeriodicalIF":5.5000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225007362","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
As auxiliary inputs for the neural network-based in-loop filter (NNLF) in versatile video coding (VVC), the prediction frame, partition map, and quantization parameter (QP) map are effectively used to enhance the reconstruction frame, i.e., the main input. The prediction frame and partition map play a crucial role in reducing compression artifacts, while the QP map distinguishes between inputs with different QPs. However, directly concatenating the auxiliary inputs with the reconstruction frame may not fully leverage their advantages and can cause conflicting effects. In this paper, we propose a content adaptive NNLF in VVC using a dual query transformer (DQT), named DQT-CALF. We adopt a DQT based on a dual query mechanism to effectively fuse low-frequency global features and high-frequency local features, thereby learning rich feature representations. The dual query mechanism enhances the comprehensiveness and representation ability of the model, reduces the risk of information loss, and is well suited to fusing multiple types of data. DQT-CALF consists of three parts: feature extraction (head), feature enhancement (body), and reconstruction (tail). In feature extraction, we use a parallel network structure for the main and auxiliary inputs and assign different weights to the feature maps according to the richness of the input information. Instead of directly concatenating the auxiliary inputs with the reconstruction frame, we generate a spatial attention map from them to enhance the reconstruction frame. In feature enhancement, we design a multi-type feature fusion module that divides the input tensor into low-frequency and high-frequency features from the frequency viewpoint and into local and global features from the spatial viewpoint. The two groups of features are processed separately and mutually transformed by high-frequency local feature generation and low-frequency global feature generation, respectively.
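The auxiliary-input handling described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the learned attention network is replaced by a simple per-pixel average of the auxiliary maps squashed through a sigmoid, and the single-channel toy frames and the function name `spatial_attention_enhance` are assumptions made purely for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention_enhance(recon, aux_maps):
    """Enhance a reconstruction frame with a spatial attention map
    derived from auxiliary inputs (e.g., prediction frame, QP map).

    recon    : H x W list of floats (toy single-channel frame)
    aux_maps : list of H x W lists (the auxiliary inputs)
    Returns an H x W list, recon scaled elementwise by
    sigmoid(mean of auxiliary maps) -- a toy stand-in for the
    learned attention map described in the abstract.
    """
    h, w = len(recon), len(recon[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Average the auxiliary signals at this pixel, then squash
            # to (0, 1) to form a spatial attention weight.
            a = sum(m[i][j] for m in aux_maps) / len(aux_maps)
            out[i][j] = recon[i][j] * sigmoid(a)
    return out

# Toy 2x2 example: reconstruction frame plus two zero auxiliary maps,
# so the attention weight is sigmoid(0) = 0.5 everywhere.
recon = [[1.0, 2.0], [3.0, 4.0]]
pred = [[0.0, 0.0], [0.0, 0.0]]  # stand-in prediction frame
qp = [[0.0, 0.0], [0.0, 0.0]]    # stand-in QP map
print(spatial_attention_enhance(recon, [pred, qp]))  # [[0.5, 1.0], [1.5, 2.0]]
```

The point of this shape, as opposed to plain concatenation, is that the auxiliary inputs modulate the reconstruction features multiplicatively instead of competing with them channel-wise.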
In reconstruction, we reconstruct the frame using a 3×3 convolution layer and a pixel-shuffle layer with a long skip connection. Experimental results demonstrate that DQT-CALF achieves average BD-rate gains of {8.79% (Y), 22.09% (U), 22.99% (V)} and {9.68% (Y), 22.27% (U), 22.52% (V)} over the VTM-11.0_NNVC-3.0 anchor under the all intra (AI) and random access (RA) configurations, respectively.
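The pixel-shuffle step in the reconstruction tail rearranges r² low-resolution channels into one output plane upscaled by r in each spatial dimension. A minimal pure-Python sketch of that rearrangement alone (the convolution and skip connection are omitted, and the function name is an assumption for illustration):

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r low-resolution channels (each H x W) into a
    single (H*r) x (W*r) plane, as a pixel-shuffle (sub-pixel
    convolution) layer does.

    channels : list of r*r matrices, each an H x W list of lists
    r        : upscaling factor
    """
    assert len(channels) == r * r
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for i in range(h):
        for j in range(w):
            for di in range(r):
                for dj in range(r):
                    # Channel di*r + dj supplies the sub-pixel at
                    # offset (di, dj) inside each r x r output block.
                    out[i * r + di][j * r + dj] = channels[di * r + dj][i][j]
    return out

# Four 1x1 channels with r = 2 fold into one 2x2 output block.
chans = [[[1]], [[2]], [[3]], [[4]]]
print(pixel_shuffle(chans, 2))  # [[1, 2], [3, 4]]
```

Because the upsampling is a pure rearrangement, all learnable capacity stays in the preceding convolution, which is the usual motivation for sub-pixel reconstruction tails.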
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.