{"title":"Encoded Marker Clusters for Auto-Labeling in Optical Motion Capture","authors":"Hao Wang, Taogang Hou, Tianhui Liu, Jiaxin Li, Tianmiao Wang","doi":"10.1145/3716847","DOIUrl":"https://doi.org/10.1145/3716847","url":null,"abstract":"Marker-based optical motion capture (MoCap) is a vital tool in applications such as virtual production, and movement sciences. However, reconstructing scattered MoCap data into real motion sequences is challenging, and data processing is time-consuming and labor-intensive. Here we propose a novel framework for MoCap auto-labeling and matching. In this framework, we designed novel clusters of reflective markers called auto-labeling encoded marker clusters (AEMCs), including clusters with an explicit header (AEMCs-E) and an implicit header (AEMCs-I). Combining cluster design and coding theory gives each cluster a unique codeword for MoCap auto-labeling and matching. Moreover, we provide a method of mapping and decoding for cluster labeling. The labeling results are only determined by the intrinsic characteristics of the clusters instead of the skeleton structure or posture of the subjects. Compared with commercial software and data-driven methods, our method has better labeling accuracy in heterogeneous targets and unknown marker layouts, which demonstrates the promising application of motion capture in humans, rigid or flexible robots.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"86 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143385362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct Rendering of Intrinsic Triangulations","authors":"Waldemar Celes","doi":"10.1145/3716314","DOIUrl":"https://doi.org/10.1145/3716314","url":null,"abstract":"Existing intrinsic triangulation frameworks represent powerful tools for geometry processing; however, they all require the extraction of the common subdivision between extrinsic and intrinsic triangulations for visualization and optimized data transfer. We describe an efficient and effective algorithm for directly rendering intrinsic triangulations that avoids extracting common subdivisions. Our strategy is to use GPU shaders to render the intrinsic triangulation while rasterizing extrinsic triangles. We rely on a point-location algorithm supported by a compact data structure, which requires only two values per extrinsic triangle to represent the correspondence between extrinsic and intrinsic triangulations. This data structure is easier to maintain than previous proposals while supporting all the standard topological operations for improving the intrinsic mesh quality, such as edge flips, triangle refinements, and vertex displacements. Computational experiments show that the proposed data structure is numerically robust and can process nearly degenerate triangulations. We also propose a meshless strategy to accurately transfer data from intrinsic to extrinsic triangulations without relying on the extraction of common subdivisions.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"38 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143083145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture Size Reduction Through Symmetric Overlap and Texture Carving","authors":"Julian Knodt, Xifeng Gao","doi":"10.1145/3714408","DOIUrl":"https://doi.org/10.1145/3714408","url":null,"abstract":"Maintaining memory efficient 3D assets is critical for game development due to size constraints for applications, and runtime costs such as GPU data transfers. While most prior work on 3D modeling focuses on reducing triangle count, few works focus on reducing texture sizes. We propose an automatic approach to reduce the texture size for 3D models while maintaining the rendered appearance of the original input. The two core components of our approach are: (1) <jats:italic>Overlapping</jats:italic> identical UV charts and <jats:italic>folding</jats:italic> mirrored regions within charts through an optimal transport optimization, and (2) <jats:italic>Carving</jats:italic> redundant and void texels in a UV-aware and texture-aware way without inverting the UV mesh. The first component creates additional void space, while the second removes void space, and their combination can greatly increase texels utilized by the UV mesh at lower texture resolutions. Our method is robust and general, and can process a 3D model with arbitrary UV layout and multiple textures without modifying the 3D mesh. We evaluate our approach on 110 models from the Google Scanned Object dataset and 64 models from Sketchfab. Compared to other approaches, ours has on average 1-3 dB PSNR higher rendering similarity and reduces pixelation in visual comparisons.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"58 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143035154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Don't Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media","authors":"Jorge Condor, Sebastien Speierer, Lukas Bode, Aljaz Bozic, Simon Green, Piotr Didyk, Adrian Jarabo","doi":"10.1145/3711853","DOIUrl":"https://doi.org/10.1145/3711853","url":null,"abstract":"Efficient scene representations are essential for many computer graphics applications. A general unified representation that can handle both surfaces and volumes simultaneously, remains a research challenge. In this work we propose a compact and efficient alternative to existing volumetric representations for rendering such as voxel grids. Inspired by recent methods for scene reconstruction that leverage mixtures of 3D Gaussians to model radiance fields, we formalize and generalize the modeling of scattering and emissive media using mixtures of simple kernel-based volumetric primitives. We introduce closed-form solutions for transmittance and free-flight distance sampling for different kernels, and propose several optimizations to use our method efficiently within any off-the-shelf volumetric path tracer. We demonstrate our method in both forward and inverse rendering of complex scattering media. Furthermore, we adapt and showcase our method in radiance field optimization and rendering, providing additional flexibility compared to current state of the art given its ray-tracing formulation. We also introduce the Epanechnikov kernel and demonstrate its potential as an efficient alternative to the traditionally-used Gaussian kernel in scene reconstruction tasks. The versatility and physically-based nature of our approach allows us to go beyond radiance fields and bring to kernel-based modeling and rendering any path-tracing enabled functionality such as scattering, relighting and complex camera models.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"33 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit Bonded Discrete Element Method with Manifold Optimization","authors":"Jia-Ming Lu, Geng-Chen Cao, Chenfeng Li, Shi-min Hu","doi":"10.1145/3711852","DOIUrl":"https://doi.org/10.1145/3711852","url":null,"abstract":"This paper proposes a novel simulation approach that combines implicit integration with the Bonded Discrete Element Method (BDEM) to achieve faster, more stable and more accurate fracture simulation. The new method leverages the efficiency of implicit schemes in dynamic simulation and the versatility of BDEM in fracture modelling. Specifically, an optimization-based integrator for BDEM is introduced and combined with a manifold optimization approach to accelerate the solution process of the quaternion-constrained system. Our comparative experiments indicate that our method offers better scale consistency and more realistic collision effects than FEM and MPM fragmentation approaches. Additionally, our method achieves a computational speedup of 2.1 ∼ 9.8 times over explicit BDEM methods.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"20 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142940443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Appearance-Preserving Scene Aggregation for Level-of-Detail Rendering","authors":"Yang Zhou, Tao Huang, Ravi Ramamoorthi, Pradeep Sen, Ling-Qi Yan","doi":"10.1145/3708343","DOIUrl":"https://doi.org/10.1145/3708343","url":null,"abstract":"Creating an appearance-preserving level-of-detail (LoD) representation for arbitrary 3D scenes is a challenging problem. The appearance of a scene is an intricate combination of both geometry and material models, and is further complicated by correlation due to the spatial configuration of scene elements. We present a novel volumetric representation for the aggregated appearance of complex scenes and a pipeline for LoD generation and rendering. The core of our representation is the <jats:italic>Aggregated Bidirectional Scattering Distribution Function</jats:italic> (ABSDF) that summarizes the far-field appearance of all surfaces inside a voxel. We propose a closed-form factorization of the ABSDF that accounts for spatially varying and orientation-varying material parameters. We tackle the challenge of capturing the correlation existing locally within a voxel and globally across different parts of the scene. Our method faithfully reproduces appearance and achieves higher quality than existing scene filtering methods. The memory footprint and rendering cost of our representation are decoupled from the original scene complexity.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"272 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142857536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Pressure, Surface Tension and Friction for SPH Fluids","authors":"Timo Probst, Matthias Teschner","doi":"10.1145/3708034","DOIUrl":"https://doi.org/10.1145/3708034","url":null,"abstract":"Fluid droplets behave significantly different from larger fluid bodies. At smaller scales, surface tension and friction between fluids and the boundary play an essential role and are even able to counteract gravitational forces. There are quite a few existing approaches that model surface tension forces within an SPH environment. However, as often as not, physical correctness and simulation stability are still major concerns with many surface tension formulations. We propose a new approach to compute surface tension that is both robust and produces the right amount of surface tension. Conversely, less attention was given to friction forces at the fluid-boundary interface. Recent experimental research indicates that Coulomb friction can be used to describe the behavior of droplets resting on a slope. Motivated by this, we develop a novel friction force formulation at the fluid-boundary interface following the Coulomb model, which allows us to replicate a new range of well known fluid behavior such as the motion of rain droplets on a window pane. Both forces are combined with an IISPH variant into one unified solver that is able to simultaneously compute strongly coupled surface tension, friction and pressure forces.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"36 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representing Long Volumetric Video with Temporal Gaussian Hierarchy","authors":"Zhen Xu, Yinghao Xu, Zhiyuan Yu, Sida Peng, Jiaming Sun, Hujun Bao, Xiaowei Zhou","doi":"10.1145/3687919","DOIUrl":"https://doi.org/10.1145/3687919","url":null,"abstract":"This paper aims to address the challenge of reconstructing long volumetric videos from multi-view RGB videos. Recent dynamic view synthesis methods leverage powerful 4D representations, like feature grids or point cloud sequences, to achieve high-quality rendering results. However, they are typically limited to short (1~2s) video clips and often suffer from large memory footprints when dealing with longer videos. To solve this issue, we propose a novel 4D representation, named Temporal Gaussian Hierarchy, to compactly model long volumetric videos. Our key observation is that there are generally various degrees of temporal redundancy in dynamic scenes, which consist of areas changing at different speeds. Motivated by this, our approach builds a multi-level hierarchy of 4D Gaussian primitives, where each level separately describes scene regions with different degrees of content change, and adaptively shares Gaussian primitives to represent unchanged scene content over different temporal segments, thus effectively reducing the number of Gaussian primitives. In addition, the tree-like structure of the Gaussian hierarchy allows us to efficiently represent the scene at a particular moment with a subset of Gaussian primitives, leading to nearly constant GPU memory usage during the training or rendering regardless of the video length. Moreover, we design a Compact Appearance Model that mixes diffuse and view-dependent Gaussians to further minimize the model size while maintaining the rendering quality. We also develop a rasterization pipeline of Gaussian primitives based on the hardware-accelerated technique to improve rendering speed. Extensive experimental results demonstrate the superiority of our method over alternative methods in terms of training cost, rendering speed, and storage usage. To our knowledge, this work is the first approach capable of efficiently handling hours of volumetric video data while maintaining state-of-the-art rendering quality.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"99 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media","authors":"Qianyue He, Dongyu Du, Haitian Jiang, Xin Jin","doi":"10.1145/3687930","DOIUrl":"https://doi.org/10.1145/3687930","url":null,"abstract":"Time-of-flight (ToF) devices have greatly propelled the advancement of various multi-modal perception applications. However, achieving accurate rendering of time-resolved information remains a challenge, particularly in scenes involving complex geometries, diverse materials and participating media. Existing ToF rendering works have demonstrated notable results, yet they struggle with scenes involving scattering media and camera-warped settings. Other steady-state volumetric rendering methods exhibit significant bias or variance when directly applied to ToF rendering tasks. To address these challenges, we integrate transient diffusion theory into path construction and propose novel sampling methods for free-path distance and scattering direction, via resampled importance sampling and offline tabulation. An elliptical sampling method is further adapted to provide controllable vertex connection satisfying any required photon traversal time. In contrast to the existing temporal uniform sampling strategy, our method is the first to consider the contribution of transient radiance to importance-sample the full path, and thus enables improved temporal path construction under multiple scattering settings. The proposed method can be integrated into both path tracing and photon-based frameworks, delivering significant improvements in quality and efficiency with at least a 5x MSE reduction versus SOTA methods in equal rendering time.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"22 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medial Skeletal Diagram: A Generalized Medial Axis Approach for Compact 3D Shape Representation","authors":"Minghao Guo, Bohan Wang, Wojciech Matusik","doi":"10.1145/3687964","DOIUrl":"https://doi.org/10.1145/3687964","url":null,"abstract":"We propose the Medial Skeletal Diagram, a novel skeletal representation that tackles the prevailing issues around skeleton sparsity and reconstruction accuracy in existing skeletal representations. Our approach augments the continuous elements in the medial axis representation to effectively shift the complexity away from the discrete elements. To that end, we introduce generalized enveloping primitives, an enhancement over the standard primitives in the medial axis, which ensure efficient coverage of intricate local features of the input shape and substantially reduce the number of discrete elements required. Moreover, we present a computational framework for constructing a medial skeletal diagram from an arbitrary closed manifold mesh. Our optimization pipeline ensures that the resulting medial skeletal diagram comprehensively covers the input shape with the fewest primitives. Additionally, each optimized primitive undergoes a post-refinement process to guarantee an accurate match with the source mesh in both geometry and tessellation. We validate our approach on a comprehensive benchmark of 100 shapes, demonstrating the sparsity of the discrete elements and superior reconstruction accuracy across a variety of cases. Finally, we exemplify the versatility of our representation in downstream applications such as shape generation, mesh decomposition, shape optimization, mesh alignment, mesh compression, and user-interactive design.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"10 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}