Multiscale motion-aware and spatial–temporal-channel contextual coding network for learned video compression
Yiming Wang, Qian Huang, Bin Tang, Xin Li, Xing Li
Knowledge-Based Systems, Volume 316, Article 113401, April 2025. DOI: 10.1016/j.knosys.2025.113401
https://www.sciencedirect.com/science/article/pii/S0950705125004484
Video compression performance depends heavily on accurate motion prediction and efficient entropy coding. However, most current learned video compression methods rely on pre-trained optical flow networks or simplistic lightweight models for motion estimation, which fail to fully exploit the spatial–temporal characteristics of video sequences, often leading to higher bit consumption and distortion in reconstructed frames. These methods also frequently overlook the rich contextual information within feature channels that could strengthen entropy modeling. To address these issues, we propose a motion-aware and spatial–temporal-channel contextual coding-based video compression network (MASTC-VC). Specifically, we introduce a multiscale motion-aware module (MS-MAM) that estimates motion information across both spatial and temporal dimensions in a coarse-to-fine manner. We also propose a spatial–temporal-channel contextual module (STCCM), which optimizes entropy coding by exploiting correlations in the latent representation, yielding bit savings from spatial, temporal, and channel perspectives. On top of this, we further introduce an uneven channel grouping scheme to balance computational complexity against rate–distortion (RD) performance. Extensive experiments demonstrate that MASTC-VC outperforms previous learned models on three benchmark datasets. Notably, our method achieves an average 10.15% BD-rate saving over H.265/HEVC (HM-16.20) under the PSNR metric and an average 23.93% BD-rate saving over H.266/VVC (VTM-13.2) under the MS-SSIM metric.
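The uneven channel grouping scheme mentioned in the abstract follows the general pattern of channel-conditional entropy models: the latent's channels are split into groups of unequal size, and each group's entropy parameters are predicted from the groups already decoded, so the small early groups serve as cheap context for the larger later ones. The sketch below is a hypothetical PyTorch illustration of that general pattern only, not the authors' implementation; the group sizes, layer widths, and names (UnevenChannelGroupEntropy, param_nets) are all assumptions.

    import torch
    import torch.nn as nn

    class UnevenChannelGroupEntropy(nn.Module):
        """Hypothetical sketch of uneven channel grouping for entropy coding.
        The latent's channels are split into uneven groups; each group's
        entropy parameters (mean/scale) are predicted from a hyperprior
        feature plus all previously decoded groups. Group sizes are
        illustrative, not the sizes used in MASTC-VC."""

        def __init__(self, latent_ch=320, hyper_ch=320,
                     groups=(16, 16, 32, 64, 192)):  # assumed uneven split
            super().__init__()
            assert sum(groups) == latent_ch
            self.groups = groups
            self.param_nets = nn.ModuleList()
            decoded_ch = 0
            for g in groups:
                # Context = hyperprior feature + channels decoded so far.
                self.param_nets.append(nn.Sequential(
                    nn.Conv2d(hyper_ch + decoded_ch, 224, 1), nn.GELU(),
                    nn.Conv2d(224, 2 * g, 1)))
                decoded_ch += g

        def forward(self, y, hyper_feat):
            """Returns per-group (mean, scale); quantization, arithmetic
            coding, and the positivity constraint on scale are omitted."""
            params, decoded = [], []
            start = 0
            for g, net in zip(self.groups, self.param_nets):
                ctx = torch.cat([hyper_feat] + decoded, dim=1)
                mean, scale = net(ctx).chunk(2, dim=1)
                params.append((mean, scale))
                decoded.append(y[:, start:start + g])  # teacher-forced in training
                start += g
            return params

At decode time the same loop runs group by group, which is where the complexity/RD trade-off arises: more, smaller groups give richer conditioning but more sequential passes, which is the balance an uneven split is meant to strike.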
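The reported 10.15% and 23.93% figures are Bjøntegaard delta (BD) rates: the average bitrate difference between two codecs at equal quality, computed by fitting rate–quality curves and integrating them over the overlapping quality range. Below is a minimal NumPy sketch of the standard calculation; the RD points are made-up illustrations, not values from the paper.

    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Bjontegaard delta rate: average bitrate difference (%) of the
        test codec vs. the anchor at equal quality; negative = savings."""
        lr_a = np.log10(rate_anchor)
        lr_t = np.log10(rate_test)
        # Fit log-rate as a cubic polynomial of quality for each codec.
        p_a = np.polyfit(psnr_anchor, lr_a, 3)
        p_t = np.polyfit(psnr_test, lr_t, 3)
        # Integrate both fits over the overlapping quality range.
        lo = max(min(psnr_anchor), min(psnr_test))
        hi = min(max(psnr_anchor), max(psnr_test))
        int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
        int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
        avg_log_diff = (int_t - int_a) / (hi - lo)
        return (10 ** avg_log_diff - 1) * 100

    # Hypothetical RD points (illustrative only, not from the paper):
    anchor_rate = [100, 200, 400, 800]      # kbps, e.g. an HM anchor
    anchor_psnr = [34.0, 36.1, 38.0, 39.8]  # dB
    test_rate   = [90, 180, 360, 720]       # kbps, e.g. a learned codec
    test_psnr   = [34.2, 36.3, 38.2, 40.0]  # dB
    print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):+.2f}%")

A negative result means the test codec needs fewer bits than the anchor at the same quality, so a 10.15% BD-rate saving corresponds to a BD-rate of about -10.15%.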
Journal introduction:
Knowledge-Based Systems is an international, interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research. It focuses on systems built on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.