{"title":"Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation","authors":"Haichao Zhu","doi":"arxiv-2409.05151","DOIUrl":null,"url":null,"abstract":"With the advancement of computer vision, dynamic 3D reconstruction techniques\nhave seen significant progress and found applications in various fields.\nHowever, these techniques generate large amounts of 3D data sequences,\nnecessitating efficient storage and transmission methods. Existing 3D model\ncompression methods primarily focus on static models and do not consider\ninter-frame information, limiting their ability to reduce data size. Temporal\nmesh compression, which has received less attention, often requires all input\nmeshes to have the same topology, a condition rarely met in real-world\napplications. This research proposes a method to compress mesh sequences with\narbitrary topology using temporal correspondence and mesh deformation. The\nmethod establishes temporal correspondence between consecutive frames, applies\na deformation model to transform the mesh from one frame to subsequent frames,\nand replaces the original meshes with deformed ones if the quality meets a\ntolerance threshold. Extensive experiments demonstrate that this method can\nachieve state-of-the-art performance in terms of compression performance. The\ncontributions of this paper include a geometry and motion-based model for\nestablishing temporal correspondence between meshes, a mesh quality assessment\nfor temporal mesh sequences, an entropy-based encoding and corner table-based\nmethod for compressing mesh sequences, and extensive experiments showing the\neffectiveness of the proposed method. All the code will be open-sourced at\nhttps://github.com/lszhuhaichao/ultron.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the advancement of computer vision, dynamic 3D reconstruction techniques have made significant progress and found applications in various fields. However, these techniques generate large amounts of 3D sequence data, necessitating efficient storage and transmission. Existing 3D model compression methods focus primarily on static models and do not exploit inter-frame information, which limits how much they can reduce data size. Temporal mesh compression has received less attention and often requires all input meshes to share the same topology, a condition rarely met in real-world applications. This work proposes a method for compressing mesh sequences with arbitrary topology using temporal correspondence and mesh deformation. The method establishes temporal correspondence between consecutive frames, applies a deformation model to transform the mesh of one frame into subsequent frames, and replaces the original meshes with the deformed ones whenever the resulting quality stays within a tolerance threshold. Extensive experiments demonstrate that the method achieves state-of-the-art compression performance. The contributions of this paper include a geometry- and motion-based model for establishing temporal correspondence between meshes, a mesh quality assessment for temporal mesh sequences, an entropy-based encoding and corner-table-based method for compressing mesh sequences, and extensive experiments showing the effectiveness of the proposed method. All code will be open-sourced at https://github.com/lszhuhaichao/ultron.
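
To make the pipeline described in the abstract concrete, below is a minimal Python sketch of the frame-replacement loop, assuming a toy nearest-neighbour correspondence, a global-translation deformation, and a mean-distance quality measure. These stand-ins are illustrative assumptions only; they do not reproduce the paper's geometry- and motion-based correspondence model, its temporal mesh quality assessment, or its entropy/corner-table coder.

    # Illustrative sketch of a greedy keyframe/deformation split over a mesh sequence.
    # Correspondence, deformation, and quality metric are trivial stand-ins.
    import numpy as np
    from scipy.spatial import cKDTree


    def nearest_neighbour_correspondence(src_vertices, dst_vertices):
        """For each source vertex, index of the closest target vertex (toy stand-in)."""
        tree = cKDTree(dst_vertices)
        _, indices = tree.query(src_vertices)
        return indices


    def translate_toward(src_vertices, dst_vertices, correspondence):
        """Apply the mean matched-vertex displacement as a global translation (toy deformation)."""
        displacement = (dst_vertices[correspondence] - src_vertices).mean(axis=0)
        return src_vertices + displacement


    def deformation_error(deformed_vertices, dst_vertices):
        """Mean distance from deformed vertices to the target vertex set (toy quality measure)."""
        tree = cKDTree(dst_vertices)
        distances, _ = tree.query(deformed_vertices)
        return distances.mean()


    def compress_sequence(vertex_frames, tolerance):
        """Greedy split of a sequence into keyframes and deformation records.

        vertex_frames: list of (N_i, 3) float arrays; vertex counts may differ per frame.
        tolerance:     maximum acceptable deformation error for a replaced frame.
        """
        encoded = [("keyframe", vertex_frames[0])]  # first frame is always stored
        reference = vertex_frames[0]

        for frame in vertex_frames[1:]:
            corr = nearest_neighbour_correspondence(reference, frame)
            deformed = translate_toward(reference, frame, corr)

            if deformation_error(deformed, frame) <= tolerance:
                # Quality acceptable: store only the deformation record, not the full mesh.
                encoded.append(("deformation", corr))
            else:
                # Quality too low: keep the original frame and make it the new reference.
                encoded.append(("keyframe", frame))
                reference = frame

        return encoded  # a real codec would then entropy-code this output

In such a greedy scheme, a frame that fails the tolerance check becomes a new keyframe and reference, so error does not accumulate across long deformation chains; in the actual method, the retained records would then be compressed with the entropy-based encoding and corner-table representation mentioned in the abstract.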