B4M: Breaking Low-Rank Adapter for Making Content-Style Customization
Yu Xu, Fan Tang, Juan Cao, Yuxin Zhang, Oliver Deussen, Weiming Dong, Jintao Li, Tong-Yee Lee
ACM Transactions on Graphics. DOI: 10.1145/3728461. Published 2025-04-05.
Abstract: Personalized generation paradigms empower designers to customize visual intellectual properties with the help of textual descriptions by adapting pre-trained text-to-image models on a few images. Recent studies customize content and detailed visual style simultaneously, but often struggle to disentangle the two. In this study, we reconsider the customization of content and style concepts from the perspective of parameter-space construction. Unlike existing methods that use a shared parameter space for content and style learning, we propose a framework that separates the parameter space to facilitate individual learning of content and style, introducing "partly learnable projection" (PLP) matrices that split the original adapters into divided sub-parameter spaces. On top of PLP, we propose a "break-for-make" customization learning pipeline: we first break the original adapters into "up projection" and "down projection" matrices for the content and style concepts under an orthogonal prior, and then make the entire parameter space by reconstructing the content and style PLP matrices, using a Riemannian precondition to adaptively balance content and style learning. Experiments on various styles, including textures, materials, and artistic styles, show that our method outperforms state-of-the-art single- and multiple-concept learning pipelines in terms of content-style-prompt alignment.
Code is available at: https://github.com/ICTMCG/Break-for-make
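As a reading aid, here is a toy NumPy sketch of the parameter-space split the abstract describes: one adapter trains only its up-projection (content) and the other only its down-projection (style), with the frozen halves drawn orthonormal as an "orthogonal prior". This is NOT the authors' code; all names and shapes are ours, and the Riemannian preconditioning of the full method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 32, 4  # illustrative layer sizes

def orthonormal(rows, cols, rng):
    """Random matrix with orthonormal columns (the 'orthogonal prior')."""
    q, _ = np.linalg.qr(rng.normal(size=(rows, cols)))
    return q[:, :cols]

# Content adapter: learnable up-projection, frozen orthonormal down-projection.
B_content = np.zeros((d_out, rank))           # trained on content images
A_content = orthonormal(d_in, rank, rng).T    # frozen, shape (rank, d_in)

# Style adapter: frozen orthonormal up-projection, learnable down-projection.
B_style = orthonormal(d_out, rank, rng)       # frozen, shape (d_out, rank)
A_style = np.zeros((rank, d_in))              # trained on style images

def merged_delta():
    # Both sub-spaces combine additively on top of the frozen base weight,
    # as in a standard low-rank adapter: W = W0 + B @ A.
    return B_content @ A_content + B_style @ A_style
```

Because the two learnable factors live in disjoint sub-parameter spaces, a gradient step on the content images never touches the style factor, and vice versa, which is the separation the abstract argues for.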
Neurally Integrated Finite Elements for Differentiable Elasticity on Evolving Domains
Gilles Daviet, Tianchang Shen, Nicholas Sharp, David I.W. Levin
ACM Transactions on Graphics. DOI: 10.1145/3727874. Published 2025-04-02.
Abstract: We present an elastic simulator for domains defined as evolving implicit functions that is efficient, robust, and differentiable with respect to both shape and material. The simulator is motivated by applications in 3D reconstruction: it is increasingly effective to recover geometry from observed images as implicit functions, but physical applications require accurately simulating, and optimizing for, the behavior of such shapes under deformation, which has remained challenging. Our key technical innovation is to train a small neural network to fit quadrature points for robust numerical integration on implicit grid cells. Coupled with a mixed finite element formulation, this yields a smooth, fully differentiable simulation model connecting the evolution of the underlying implicit surface to its elastic response. We demonstrate the efficacy of our approach on forward simulation of implicits, direct simulation of 3D shapes during editing, and novel physics-based shape and topology optimizations in conjunction with differentiable rendering.
Diffusing Winding Gradients (DWG): A Parallel and Scalable Method for 3D Reconstruction from Unoriented Point Clouds
Weizhou Liu, Jiaze Li, Xuhui Chen, Fei Hou, Shiqing Xin, Xingce Wang, Zhongke Wu, Chen Qian, Ying He
ACM Transactions on Graphics. DOI: 10.1145/3727873. Published 2025-04-01.
Abstract: This paper presents Diffusing Winding Gradients (DWG) for reconstructing watertight surfaces from unoriented point clouds. Our method orients points by exploiting the alignment between globally consistent normals and the gradients of the screened generalized winding number (sGWN) field, a robust variant of the standard GWN field. Starting with an unoriented point cloud, DWG assigns a random normal to each point. It then computes the corresponding sGWN field and extracts the level set whose iso-value is the average sGWN value across all input points. The gradients of this level set are used to update the point normals. This cycle of recomputing the sGWN field and updating point normals repeats until the sGWN level sets stabilize and their gradients cease to change. Unlike conventional methods, DWG does not rely on solving linear systems or optimizing objective functions, which simplifies its implementation and makes it well suited to efficient parallel execution. Experimental results demonstrate that DWG significantly outperforms existing methods in runtime performance. For large-scale models with 10 to 20 million points, our CUDA implementation on an NVIDIA RTX 4090 GPU runs 30-120 times faster than iPSR, the leading sequential method, tested on a high-end PC with an Intel i9 CPU. Furthermore, by employing the screened variant of the GWN, DWG is robust against noise and outliers and proves effective for models with thin structures and for real-world inputs with overlapping and misaligned scans.
For source code and additional results, visit our project webpage: https://dwgtech.github.io/.
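The orientation loop the abstract describes can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: we use the unscreened generalized winding number, evaluate its gradient directly at the sample points instead of extracting the average-iso-value level set, and all function names are ours.

```python
import numpy as np

def fibonacci_sphere(n):
    """Near-uniform samples on the unit sphere (toy test geometry)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def winding_number(query, points, normals, areas):
    """GWN field: w(q) = sum_i A_i (x_i - q).n_i / (4 pi |x_i - q|^3)."""
    d = points[None, :, :] - query[:, None, :]         # (Q, N, 3)
    r = np.maximum(np.linalg.norm(d, axis=-1), 1e-12)  # (Q, N)
    dot = np.einsum('qnk,nk->qn', d, normals)
    return (areas * dot / (4.0 * np.pi * r ** 3)).sum(axis=-1)

def winding_gradient(query, points, normals, areas, tol=1e-9):
    """Analytic gradient of w at the queries; singular self terms dropped."""
    d = points[None, :, :] - query[:, None, :]         # (Q, N, 3)
    r = np.linalg.norm(d, axis=-1)                     # (Q, N)
    keep = r > tol                                     # mask q == x_i
    r = np.where(keep, r, 1.0)
    dot = np.einsum('qnk,nk->qn', d, normals)
    term = (-normals[None, :, :] / r[..., None] ** 3
            + 3.0 * dot[..., None] * d / r[..., None] ** 5)
    return (term * (areas * keep)[..., None]).sum(axis=1) / (4.0 * np.pi)

def dwg_orient(points, areas, n_iters=30, seed=0):
    """Start from random normals; repeatedly realign them with -grad w."""
    rng = np.random.default_rng(seed)
    normals = rng.normal(size=points.shape)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    for _ in range(n_iters):
        g = winding_gradient(points, points, normals, areas)
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        # w rises toward the interior, so -grad points outward; keep the
        # previous normal wherever the gradient (nearly) vanishes.
        normals = np.where(norms > 1e-12, -g / np.maximum(norms, 1e-12), normals)
    return normals
```

Each iteration is an independent per-point evaluation of the field, which is why the procedure parallelizes so well: there is no linear system coupling the unknowns, only repeated field evaluations and normal updates.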
Fast Determination and Computation of Self-intersections for NURBS Surfaces
Kai Li, Xiaohong Jia, Falai Chen
ACM Transactions on Graphics. DOI: 10.1145/3727620. Published 2025-03-31.
Abstract: Self-intersections of NURBS surfaces are unavoidable in the CAD modeling process, especially in operations such as offsetting or sweeping, and their presence can cause problems in subsequent simulation and manufacturing. Fast detection of self-intersections in NURBS surfaces is therefore in high demand in industrial applications. Self-intersections are essentially singular points on the surface, and although singular points have a long history of study in the mathematics community, their fast and robust determination and computation have remained challenging in practice. In this paper, we construct an algebraic signature whose non-negativity is proven sufficient to exclude the existence of self-intersections from a global perspective. Applying this signature recursively yields an efficient algorithm for determining whether self-intersections exist. Once a self-intersection is detected, its locus can, if needed, be computed by a further recursive cross-use of this signature and the surface-surface intersection function. Various experiments and comparisons with existing methods, as well as with geometry kernels including OCCT and ACIS, validate the robustness and efficiency of our algorithm. We also adapt the algorithm to self-intersection elimination and trimming, and to applications in mesh generation, boolean operations, and shelling.
SpotLessSplats: Ignoring Distractors in 3D Gaussian Splatting
Sara Sabour, Lily Goli, George Kopanas, Mark Matthews, Dmitry Lagun, Leonidas Guibas, Alec Jacobson, David Fleet, Andrea Tagliasacchi
ACM Transactions on Graphics. DOI: 10.1145/3727143. Published 2025-03-29.
Abstract: 3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds that make it suitable for real-time applications. However, current methods require highly controlled environments (no moving people or wind-blown elements, and consistent lighting) to meet the inter-view consistency assumption of 3DGS, which makes reconstruction of real-world captures problematic. We present SpotLessSplats, an approach that leverages pre-trained, general-purpose features coupled with robust optimization to effectively ignore transient distractors. Our method achieves state-of-the-art reconstruction quality, both visually and quantitatively, on casual captures.
NeST: Neural Stress Tensor Tomography by Leveraging 3D Photoelasticity
Akshat Dave, Tianyi Zhang, Aaron Young, Ramesh Raskar, Wolfgang Heidrich, Ashok Veeraraghavan
ACM Transactions on Graphics. DOI: 10.1145/3723873. Published 2025-03-22.
Abstract: Photoelasticity enables full-field stress analysis in transparent objects through stress-induced birefringence. Existing techniques are limited to 2D slices and require destructively slicing the object; recovering the internal 3D stress distribution of the entire object is challenging because it involves solving a tensor tomography problem and handling phase-wrapping ambiguities. We introduce NeST, an analysis-by-synthesis approach for reconstructing 3D stress tensor fields as neural implicit representations from polarization measurements. Our key insight is to jointly handle phase unwrapping and tensor tomography using a differentiable forward model based on Jones calculus. Unlike prior linear approximations, our non-linear model faithfully matches real captures. We develop an experimental multi-axis polariscope setup to capture 3D photoelasticity and demonstrate that NeST reconstructs the internal stress distribution for objects with varying shapes and force conditions. We also showcase novel applications in stress analysis, such as visualizing photoelastic fringes by virtually slicing the object and viewing photoelastic fringes from unseen viewpoints. NeST paves the way for scalable, non-destructive 3D photoelastic analysis.
Kinematic Motion Retargeting for Contact-Rich Anthropomorphic Manipulations
Arjun Sriram Lakshmipathy, Jessica Hodgins, Nancy Pollard
ACM Transactions on Graphics. DOI: 10.1145/3723872. Published 2025-03-15.
Abstract: Hand motion capture data is now relatively easy to obtain, even for complicated grasps; however, this data is of limited use without the ability to retarget it onto the hands of a specific character or robot. The target hand may differ dramatically in geometry, number of degrees of freedom (DOFs), or number of fingers. We present a simple but effective framework capable of kinematically retargeting human hand-object manipulations from a publicly available dataset to diverse target hands by exploiting contact areas. We formulate the retargeting operation as a non-isometric shape-matching problem and use a combination of surface contact and marker data to progressively estimate, refine, and fit the final target hand trajectory using inverse kinematics (IK). Foundational to our framework is a novel shape-matching process, which we show enables predictable and robust transfer of contact data over full manipulations (pre-grasp, pickup, in-hand re-orientation, and release) while providing an intuitive means for artists to specify correspondences with relatively few inputs. We validate our framework through demonstrations across five different hands and six motions of different objects; we additionally demonstrate a bimanual task, perform stress tests, and compare our method against existing hand-retargeting approaches. Finally, we show that our method enables novel capabilities such as object substitution and visualizing the impact of hand design choices over full trajectories.
Encoded Marker Clusters for Auto-Labeling in Optical Motion Capture
Hao Wang, Taogang Hou, Tianhui Liu, Jiaxin Li, Tianmiao Wang
ACM Transactions on Graphics. DOI: 10.1145/3716847. Published 2025-02-10.
Abstract: Marker-based optical motion capture (MoCap) is a vital tool in applications such as virtual production and the movement sciences. However, reconstructing scattered MoCap data into real motion sequences is challenging, and data processing is time-consuming and labor-intensive. We propose a novel framework for MoCap auto-labeling and matching. Within this framework, we design novel clusters of reflective markers, called auto-labeling encoded marker clusters (AEMCs), comprising clusters with an explicit header (AEMCs-E) and clusters with an implicit header (AEMCs-I). Combining cluster design with coding theory gives each cluster a unique codeword for MoCap auto-labeling and matching, and we provide a mapping and decoding method for cluster labeling. The labeling results are determined solely by the intrinsic characteristics of the clusters rather than by the skeleton structure or posture of the subjects. Compared with commercial software and data-driven methods, our method achieves better labeling accuracy on heterogeneous targets and unknown marker layouts, demonstrating promising applications of motion capture to humans and to rigid or flexible robots.
Direct Rendering of Intrinsic Triangulations
Waldemar Celes
ACM Transactions on Graphics. DOI: 10.1145/3716314. Published 2025-02-03.
Abstract: Existing intrinsic triangulation frameworks are powerful tools for geometry processing; however, they all require extracting the common subdivision between the extrinsic and intrinsic triangulations for visualization and optimized data transfer. We describe an efficient and effective algorithm for directly rendering intrinsic triangulations that avoids extracting common subdivisions. Our strategy is to use GPU shaders to render the intrinsic triangulation while rasterizing extrinsic triangles. We rely on a point-location algorithm supported by a compact data structure that requires only two values per extrinsic triangle to represent the correspondence between the extrinsic and intrinsic triangulations. This data structure is easier to maintain than previous proposals while supporting all the standard topological operations for improving intrinsic mesh quality, such as edge flips, triangle refinements, and vertex displacements. Computational experiments show that the proposed data structure is numerically robust and can process nearly degenerate triangulations. We also propose a meshless strategy to accurately transfer data from intrinsic to extrinsic triangulations without relying on the extraction of common subdivisions.
Texture Size Reduction Through Symmetric Overlap and Texture Carving
Julian Knodt, Xifeng Gao
ACM Transactions on Graphics. DOI: 10.1145/3714408. Published 2025-01-25.
Abstract: Maintaining memory-efficient 3D assets is critical in game development because of application size constraints and runtime costs such as GPU data transfers. While most prior work on 3D modeling focuses on reducing triangle counts, few works address reducing texture sizes. We propose an automatic approach that reduces the texture size of a 3D model while maintaining the rendered appearance of the original input. The two core components of our approach are (1) overlapping identical UV charts and folding mirrored regions within charts through an optimal-transport optimization, and (2) carving redundant and void texels in a UV-aware and texture-aware way without inverting the UV mesh. The first component creates additional void space, the second removes it, and their combination greatly increases the fraction of texels utilized by the UV mesh at lower texture resolutions. Our method is robust and general: it can process a 3D model with an arbitrary UV layout and multiple textures without modifying the 3D mesh. We evaluate our approach on 110 models from the Google Scanned Objects dataset and 64 models from Sketchfab. Compared to other approaches, ours achieves on average 1-3 dB higher PSNR rendering similarity and reduces pixelation in visual comparisons.