{"title":"SPAC: Sampling-Based Progressive Attribute Compression for Dense Point Clouds","authors":"Xiaolong Mao;Hui Yuan;Tian Guo;Shiqi Jiang;Raouf Hamzaoui;Sam Kwong","doi":"10.1109/TIP.2025.3565214","DOIUrl":null,"url":null,"abstract":"We propose an end-to-end attribute compression method for dense point clouds. The proposed method combines a frequency sampling module, an adaptive scale feature extraction module with geometry assistance, and a global hyperprior entropy model. The frequency sampling module uses a Hamming window and the Fast Fourier Transform to extract high-frequency components of the point cloud. The difference between the original point cloud and the sampled point cloud is divided into multiple sub-point clouds. These sub-point clouds are then partitioned using an octree, providing a structured input for feature extraction. The feature extraction module integrates adaptive convolutional layers and uses offset-attention to capture both local and global features. Then, a geometry-assisted attribute feature refinement module is used to refine the extracted attribute features. Finally, a global hyperprior model is introduced for entropy encoding. This model propagates hyperprior parameters from the deepest (base) layer to the other layers, further enhancing the encoding efficiency. At the decoder, a mirrored network is used to progressively restore features and reconstruct the color attribute through transposed convolutional layers. The proposed method encodes base layer information at a low bitrate and progressively adds enhancement layer information to improve reconstruction accuracy. Compared to the best anchor of the latest geometry-based point cloud compression (G-PCC) standard that was proposed by the Moving Picture Experts Group (MPEG), the proposed method can achieve an average Bjøntegaard delta bitrate of -24.58% for the Y component (resp. -21.23% for YUV components) on the MPEG Category Solid dataset and -22.48% for the Y component (resp. -17.19% for YUV components) on the MPEG Category Dense dataset. This is the first instance that a learning-based attribute codec outperforms the G-PCC standard on these datasets by following the common test conditions specified by MPEG. Our source code will be made publicly available on <uri>https://github.com/sduxlmao/SPAC</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2939-2953"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11002415/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We propose an end-to-end attribute compression method for dense point clouds. The proposed method combines a frequency sampling module, an adaptive-scale feature extraction module with geometry assistance, and a global hyperprior entropy model. The frequency sampling module uses a Hamming window and the fast Fourier transform (FFT) to extract the high-frequency components of the point cloud. The difference between the original point cloud and the sampled point cloud is divided into multiple sub-point clouds, which are then partitioned with an octree to provide a structured input for feature extraction. The feature extraction module integrates adaptive convolutional layers and uses offset-attention to capture both local and global features. A geometry-assisted refinement module then refines the extracted attribute features. Finally, a global hyperprior model is introduced for entropy coding. This model propagates hyperprior parameters from the deepest (base) layer to the other layers, further improving coding efficiency. At the decoder, a mirrored network progressively restores the features and reconstructs the color attributes through transposed convolutional layers. The proposed method encodes base-layer information at a low bitrate and progressively adds enhancement-layer information to improve reconstruction accuracy. Compared with the best anchor of the latest geometry-based point cloud compression (G-PCC) standard proposed by the Moving Picture Experts Group (MPEG), the proposed method achieves an average Bjøntegaard delta bitrate of -24.58% for the Y component (resp. -21.23% for YUV components) on the MPEG Category Solid dataset and -22.48% for the Y component (resp. -17.19% for YUV components) on the MPEG Category Dense dataset. This is the first time that a learning-based attribute codec has outperformed the G-PCC standard on these datasets under the common test conditions specified by MPEG. Our source code will be made publicly available at https://github.com/sduxlmao/SPAC
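The abstract only sketches how the frequency sampling module operates. The following minimal NumPy sketch illustrates the underlying idea: window an attribute signal (ordered along the point cloud), take its FFT, suppress low frequencies, and keep the points carrying the most high-frequency energy. The Morton-style ordering proxy, luma projection, cutoff choice, and keep_ratio parameter are our assumptions for illustration only, not the paper's actual design.

```python
import numpy as np

def frequency_sample(points, colors, keep_ratio=0.25):
    """Illustrative sketch of frequency-based point sampling.

    Orders points along a crude Morton-like curve, windows the luma
    signal with a Hamming window, high-passes it via the FFT, and keeps
    the points with the largest high-frequency response. The split into
    a sampled cloud and a residual cloud mirrors the paper's pipeline,
    but the exact criterion here is a hypothetical stand-in.
    """
    # Lexicographic sort as a simple proxy for a space-filling-curve order.
    order = np.lexsort((points[:, 2], points[:, 1], points[:, 0]))
    # Project RGB to luma so the FFT operates on a 1-D attribute signal.
    luma = colors[order] @ np.array([0.299, 0.587, 0.114])

    n = len(luma)
    windowed = luma * np.hamming(n)        # taper to reduce spectral leakage
    spectrum = np.fft.fft(windowed)

    # High-pass: zero the low-frequency bins at both ends of the spectrum
    # (DC and low positive frequencies, plus the low negative frequencies).
    cutoff = max(1, n // 4)                # assumed cutoff, not from the paper
    spectrum[:cutoff] = 0
    spectrum[-cutoff:] = 0
    highfreq = np.abs(np.fft.ifft(spectrum))  # per-point high-frequency energy

    # Keep the points with the largest high-frequency response.
    k = max(1, int(keep_ratio * n))
    keep = order[np.argsort(highfreq)[-k:]]
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    # Return (sampled cloud, residual cloud), each as (points, colors).
    return points[keep], colors[keep], points[~mask], colors[~mask]

# Usage on synthetic data:
pts = np.random.rand(2048, 3)
cols = np.random.rand(2048, 3)
hi_pts, hi_cols, rest_pts, rest_cols = frequency_sample(pts, cols)
```

In the method described by the abstract, the kept points would form the sampled (high-frequency) point cloud, while the residual would be divided into sub-point clouds and octree-partitioned before feature extraction.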