Title: A Deep Transformer-Based Fast CU Partition Approach for Inter-Mode VVC
Authors: Tianyi Li; Mai Xu; Zheng Liu; Ying Chen; Kai Li
DOI: 10.1109/TIP.2025.3533204
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 1133-1148
Publication date: 2025-01-30
URL: https://ieeexplore.ieee.org/document/10857954/
Citations: 0
Abstract
The latest versatile video coding (VVC) standard, developed by the Joint Video Experts Team (JVET), significantly improves coding efficiency over its predecessor, but at the cost of $6\sim 26$ times higher computational complexity. The quad-tree plus multi-type tree (QTMT)-based coding unit (CU) partition accounts for most of the encoding time in VVC. This paper proposes a data-driven fast CU partition approach based on an efficient Transformer model to accelerate VVC inter-coding. First, we establish a large-scale database for inter-mode VVC, comprising diverse CU partition patterns from more than 800 raw video sequences across various resolutions and contents. Next, we propose a deep neural network model with a Transformer-based temporal topology for predicting the CU partition, named TCP-Net, which adapts to the group of pictures (GOP) hierarchy in VVC. Then, we design a two-stage structured output for TCP-Net, reflecting both the locations of CU edges and the split modes of all possible CUs. Accordingly, we develop a dual-supervised optimization mechanism to train TCP-Net with improved accuracy. Experimental results verify that our approach reduces encoding time by $46.89\sim 55.91$% with negligible rate-distortion (RD) degradation, outperforming other state-of-the-art approaches.
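To illustrate why predicting split modes saves encoding time: a VVC encoder normally evaluates the rate-distortion (RD) cost of every QTMT split mode for each CU, and a fast-partition model lets it skip the unlikely ones. The sketch below is a minimal, hypothetical illustration of such pruning, not the paper's actual TCP-Net mechanism; the six mode names are the standard QTMT options in VVC, while the probability vector and the `keep_mass` threshold are illustrative assumptions.

```python
# Hypothetical sketch: pruning the QTMT split-mode search with a predicted
# probability distribution (e.g., from a model like TCP-Net). The six mode
# names are the standard VVC QTMT options; the threshold is an assumption.

QTMT_MODES = ["NO_SPLIT", "QT", "BT_HOR", "BT_VER", "TT_HOR", "TT_VER"]

def prune_split_modes(probs, keep_mass=0.9):
    """Keep the most probable modes until their cumulative probability
    reaches `keep_mass`; the encoder skips RD checks for the rest."""
    ranked = sorted(zip(QTMT_MODES, probs), key=lambda mp: mp[1], reverse=True)
    kept, total = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        total += p
        if total >= keep_mass:
            break
    return kept

# A confident prediction lets the encoder test only 2 of the 6 modes.
print(prune_split_modes([0.55, 0.40, 0.02, 0.01, 0.01, 0.01]))
# -> ['NO_SPLIT', 'QT']
```

Testing fewer split modes per CU directly reduces the number of RD evaluations, which is the dominant cost the paper targets; the reported 46.89%-55.91% encoding-time reduction comes from this kind of search-space pruning guided by the network's structured output.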