{"title":"Single-shot HDR using conventional image sensor shutter functions and optical randomization","authors":"Xiang Dai, Kyrollos Yanny, Kristina Monakhova, Nicholas Antipa","doi":"10.1145/3748718","DOIUrl":"https://doi.org/10.1145/3748718","url":null,"abstract":"High-dynamic-range (HDR) imaging is an essential technique for overcoming the dynamic range limits of image sensors. The classic method relies on multiple exposures, which slows capture time, resulting in motion artifacts when imaging dynamic scenes. Single-shot HDR imaging alleviates this issue by encoding HDR data in a single exposure, then computationally recovering it. Many established methods use strong image priors to recover improperly exposed detail; these approaches struggle with extended highlight regions. In this work, we demonstrate a novel single-shot HDR capture method that utilizes the global reset release (GRR) shutter mode commonly found in off-the-shelf sensors. GRR shutter mode applies a longer exposure time to rows closer to the bottom of the sensor. We use optics that relay a randomly permuted (shuffled) image onto the sensor, effectively creating spatially randomized exposures across the scene. The resulting exposure diversity allows us to recover HDR data by solving an optimization problem with a simple total variation image prior. In simulation, we demonstrate that our method outperforms other single-shot methods when many sensor pixels are saturated (10% or more), and is competitive at modest saturation (1%). Finally, we demonstrate a physical lab prototype that uses an off-the-shelf random fiber bundle for the optical shuffling. 
The fiber bundle is coupled to a low-cost commercial sensor operating in GRR shutter mode. Our prototype achieves a dynamic range of up to 73dB using an 8-bit sensor with 48dB dynamic range.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"14 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144747516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
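The capture model described in this abstract can be sketched numerically. The following is a minimal, hypothetical NumPy simulation (array sizes, exposure range, and the naive inversion are illustrative assumptions, not the paper's implementation, which solves a TV-regularized optimization):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy HDR scene radiance (arbitrary units, hypothetical values).
H, W = 8, 8
scene = rng.uniform(0.01, 50.0, size=(H, W))

# GRR shutter: rows closer to the bottom integrate longer.
t_row = np.linspace(0.1, 1.0, H)[:, None] * np.ones((1, W))

# Optical shuffling: a random permutation stands in for the fiber bundle,
# spreading every scene region across rows with diverse exposure times.
perm = rng.permutation(H * W)
shuffled = scene.ravel()[perm].reshape(H, W)

# Sensor normalized to [0, 1]; bright pixels saturate (clip).
raw = np.clip(shuffled * t_row, 0.0, 1.0)

# Naive per-pixel inversion: divide out exposure, mask saturated pixels.
valid = raw < 1.0
est_shuffled = np.where(valid, raw / t_row, np.nan)

# Undo the permutation to return to scene coordinates.
est = np.full(H * W, np.nan)
est[perm] = est_shuffled.ravel()
est = est.reshape(H, W)
```

In this sketch, unsaturated pixels are recovered exactly; the paper's total-variation prior additionally fills in the saturated (here NaN) pixels from their unsaturated, randomly scattered neighbors.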
{"title":"Painless Differentiable Rotation Dynamics","authors":"Magí Romanyà-Serrasolsas, Juan J. Casafranca, Miguel A. Otaduy","doi":"10.1145/3730944","DOIUrl":"https://doi.org/10.1145/3730944","url":null,"abstract":"We propose the formulation of forward and differentiable rigid-body dynamics using Lie-algebra rotation derivatives. In particular, we show how this approach can easily be applied to incremental-potential formulations of forward dynamics, and we introduce a novel definition of adjoints for differentiable dynamics. In contrast to other parameterizations of rotations (notably the popular rotation-vector parameterization), our approach leads to painlessly simple and compact derivatives, better conditioning, and higher runtime efficiency. We demonstrate our approach on fundamental rigid-body problems, but also on Cosserat rods as an example of multi-rigid-body dynamics.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"79 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
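The Lie-algebra rotation treatment this abstract contrasts with rotation-vector parameterization can be illustrated with a standard textbook sketch (Rodrigues' formula and an incremental tangent-space update, not the paper's code):

```python
import numpy as np

def hat(w):
    """so(3) hat operator: maps a 3-vector to a skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Matrix exponential of hat(w) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    K = hat(w)
    if theta < 1e-12:
        return np.eye(3) + K  # first-order fallback near zero angle
    return (np.eye(3)
            + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta**2 * (K @ K))

# Incremental (right) update in the tangent space: R <- R exp(hat(delta)).
# Derivatives with respect to the small increment delta stay simple and
# well-conditioned, unlike differentiating through a global rotation vector.
R = exp_so3(np.array([0.3, -0.2, 0.5]))
R_new = R @ exp_so3(np.array([1e-3, 0.0, 0.0]))
```

Because the increment lives in the Lie algebra, `R_new` remains a valid rotation (orthonormal, determinant one) without renormalization.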
{"title":"Diffuse-CLoC: Guided Diffusion for Physics-based Character Look-ahead Control","authors":"Xiaoyu Huang, Takara Truong, Yunbo Zhang, Fangzhou Yu, Jean Pierre Sleiman, Jessica Hodgins, Koushil Sreenath, Farbod Farshidian","doi":"10.1145/3731206","DOIUrl":"https://doi.org/10.1145/3731206","url":null,"abstract":"We present Diffuse-CLoC, a guided diffusion framework for physics-based look-ahead control that enables intuitive, steerable, and physically realistic motion generation. While existing kinematic motion generation methods based on diffusion models offer intuitive steering via inference-time conditioning, they often fail to produce physically viable motions. In contrast, recent diffusion-based control policies have shown promise in generating physically realizable motion sequences, but the lack of kinematic prediction limits their steerability. Diffuse-CLoC addresses these challenges through a key insight: modeling the joint distribution of states and actions within a single diffusion model makes action generation steerable by conditioning it on the predicted states. This approach allows us to leverage established conditioning techniques from kinematic motion generation while producing physically realistic motions. As a result, we achieve planning capabilities without the need for a high-level planner. Our method handles a diverse set of unseen long-horizon downstream tasks through a single pre-trained model, including static and dynamic obstacle avoidance, motion in-betweening, and task-space control. 
Experimental results show that our method significantly outperforms the traditional hierarchical framework of high-level motion diffusion and low-level tracking.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"12 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discrete Torsion of Connection Forms on Simplicial Meshes","authors":"Theo Braune, Mark Gillespie, Yiying Tong, Mathieu Desbrun","doi":"10.1145/3731197","DOIUrl":"https://doi.org/10.1145/3731197","url":null,"abstract":"While discrete (metric) connections have become a staple of n-vector field design and analysis on simplicial meshes, the notion of torsion of a discrete connection has remained unstudied. This is all the more surprising as torsion is a crucial component in the fundamental theorem of Riemannian geometry, which introduces the existence and uniqueness of the Levi-Civita connection induced by the metric. In this paper, we extend the existing geometry processing toolbox by providing torsion control over discrete connections. Our approach consists in first introducing a new discrete Levi-Civita connection for a metric with locally-constant curvature to replace the hinge connection of a triangle mesh whose curvature is concentrated at singularities; from this reference connection, we define the discrete torsion of a connection to be the discrete dual 1-form by which a connection deviates from our discrete Levi-Civita connection. We discuss how the curvature and torsion of a discrete connection can then be controlled and assigned in a manner consistent with the continuous case. 
We also illustrate our approach through theoretical analysis and practical examples arising in vector and frame design.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"22 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fully-statistical Wave Scattering Model for Heterogeneous Surfaces","authors":"Zhengze Liu, Yuchi Huo, Yifan Peng, Rui Wang","doi":"10.1145/3730828","DOIUrl":"https://doi.org/10.1145/3730828","url":null,"abstract":"Heterogeneous surfaces exhibit spatially varying geometry and material, and therefore admit diverse appearances. Existing computer graphics works can only model heterogeneity using explicit structures or statistical parameters that describe a coarser level of detail. We push this boundary by introducing a new model that describes heterogeneous surfaces fully statistically at the microscopic level, with rich geometry and material details that are comparable to the wavelengths of light. We treat the heterogeneous surfaces as a mixture of stochastic vector processes. We adapt the well-known generalized Harvey-Shack theory to quantify the mean scattered intensity, i.e., the BRDF of these surfaces. We further explore the covariance statistic of the scattered field and derive its rank-1 decomposition. This leads to a practical algorithm that samples the speckles (fluctuating intensities) from the statistics, enriching the appearance without explicit definition of heterogeneous surfaces. The formulations are analytic, and we validate the quantities by comprehensive numerical simulations. Our heterogeneous surface model demonstrates various applications including corrosion (natural), particle deposition (man-made), and height-correlated mixture (artistic). 
Code for this paper is available at https://github.com/Rendering-at-ZJU/HeteroSurface.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"26 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Mesh Processing on the GPU","authors":"Ahmed H. Mahmoud, Serban D. Porumbescu, John D. Owens","doi":"10.1145/3731162","DOIUrl":"https://doi.org/10.1145/3731162","url":null,"abstract":"We present a system for dynamic triangle mesh processing entirely on the GPU. Our system features an efficient data structure that enables rapid updates to mesh connectivity and attributes. By partitioning the mesh into small patches, we process all dynamic updates for each patch within the GPU's fast shared memory. This approach leverages speculative processing for conflict handling, minimizing rollback costs, maximizing parallelism, and reducing locking overhead. Additionally, we introduce a new programming model for dynamic mesh processing. This model provides concise semantics for dynamic updates, abstracting away concerns about conflicting updates during parallel execution. At the core of our model is the cavity operator, a general mesh update operator that facilitates any dynamic operation by removing a set of mesh elements and inserting new ones into the resulting void. We applied our system to various GPU applications, including isotropic remeshing, surface tracking, mesh decimation, and Delaunay edge flips. On large inputs, our system achieves an order-of-magnitude speedup compared to multi-threaded CPU solutions and is more than two orders of magnitude faster than state-of-the-art single-threaded CPU solutions. 
Furthermore, our data structure outperforms state-of-the-art GPU static data structures in terms of both speed and memory efficiency.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"21 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When Gaussian Meets Surfel: Ultra-fast High-fidelity Radiance Field Rendering","authors":"Keyang Ye, Tianjia Shao, Kun Zhou","doi":"10.1145/3730925","DOIUrl":"https://doi.org/10.1145/3730925","url":null,"abstract":"We introduce Gaussian-enhanced Surfels (GESs), a bi-scale representation for radiance field rendering, wherein a set of 2D opaque surfels with view-dependent colors represent the coarse-scale geometry and appearance of scenes, and a few 3D Gaussians surrounding the surfels supplement fine-scale appearance details. The rendering with GESs consists of two passes - surfels are first rasterized through a standard graphics pipeline to produce depth and color maps, and then Gaussians are splatted with depth testing and order-independent color accumulation at each pixel. The optimization of GESs from multi-view images is performed through an elaborate coarse-to-fine procedure, faithfully capturing rich scene appearance. The entirely sorting-free rendering of GESs not only achieves very fast rendering speeds, but also produces view-consistent images, successfully avoiding popping artifacts under view changes. The basic GES representation can be easily extended to achieve antialiasing in rendering (Mip-GES), boosted rendering speeds (Speedy-GES) and compact storage (Compact-GES), and reconstruct better scene geometries by replacing 3D Gaussians with 2D Gaussians (2D-GES). 
Experimental results show that GESs advance the state of the art as a compelling representation for ultra-fast high-fidelity radiance field rendering.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"27 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vector-Valued Monte Carlo Integration Using Ratio Control Variates","authors":"Haolin Lu, Delio Vicini, Wesley Chang, Tzu-Mao Li","doi":"10.1145/3731175","DOIUrl":"https://doi.org/10.1145/3731175","url":null,"abstract":"Variance reduction techniques are widely used for reducing the noise of Monte Carlo integration. However, these techniques are typically designed with the assumption that the integrand is scalar-valued. Recognizing that rendering and inverse rendering broadly involve vector-valued integrands, we identify the limitations of classical variance reduction methods in this context. To address this, we introduce ratio control variates, an estimator that leverages a ratio-based approach instead of the conventional difference-based control variates. Our analysis and experiments demonstrate that ratio control variates can significantly reduce the mean squared error of vector-valued integration compared to existing methods and are broadly applicable to various rendering and inverse rendering tasks.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"214 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
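The distinction between classical difference-based control variates and the ratio form named in this abstract can be sketched on a toy vector-valued integrand (a hypothetical example of the general idea, not the paper's rendering setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vector-valued integrand over [0, 1): f(x) = (sin(pi x), x^2),
# with a correlated control g(x) = (x, x) whose integral G = (1/2, 1/2)
# is known in closed form.

def f(x):
    return np.stack([np.sin(np.pi * x), x**2], axis=-1)

def g(x):
    return np.stack([x, x], axis=-1)

G = np.array([0.5, 0.5])

def difference_cv(n, c=1.0):
    # Classical control variate: integrate f - c*g by Monte Carlo,
    # then add back the known c*G.
    x = rng.random(n)
    return (f(x) - c * g(x)).mean(axis=0) + c * G

def ratio_cv(n):
    # Ratio control variate: scale the known integral G by the
    # component-wise ratio of sample means (consistent, though only
    # asymptotically unbiased).
    x = rng.random(n)
    return G * f(x).mean(axis=0) / g(x).mean(axis=0)
```

The true integral is (2/pi, 1/3); both estimators converge to it, and the ratio form sidesteps choosing the coefficient c.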
{"title":"AlignTex: Pixel-Precise Texture Generation from Multi-view Artwork","authors":"Yuqing Zhang, Hao Xu, Yiqian Wu, Sirui Chen, Sirui Lin, Xiang Li, Xifeng Gao, Xiaogang Jin","doi":"10.1145/3731158","DOIUrl":"https://doi.org/10.1145/3731158","url":null,"abstract":"Current 3D asset creation pipelines typically consist of three stages: creating multi-view concept art, producing 3D meshes based on the artwork, and painting textures for the meshes—an often labor-intensive process. Automated texture generation offers significant acceleration, but prior methods, which fine-tune 2D diffusion models with multi-view input images, often fail to preserve pixel-level details. These methods primarily emphasize semantic and subject consistency, which do not meet the requirements of artwork-guided texture workflows. To address this, we present AlignTex, a novel framework for generating high-quality textures from 3D meshes and multi-view artwork, ensuring both appearance detail and geometric consistency. AlignTex operates in two stages: aligned image generation and texture refinement. The core of our approach, AlignNet, resolves complex misalignments by extracting information from both the artwork and the mesh, generating images compatible with orthographic projection while maintaining geometric and visual fidelity. After projecting aligned images into the texture space, further refinement addresses seams and self-occlusion using an inpainting model and a geometry-aware texture dilation method. 
Experimental results demonstrate that AlignTex outperforms baseline methods in generation quality and efficiency, offering a practical solution to enhance 3D asset creation in gaming and film production.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"26 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Order Matters: Learning Element Ordering for Graphic Design Generation","authors":"Bo Yang, Ying Cao","doi":"10.1145/3730858","DOIUrl":"https://doi.org/10.1145/3730858","url":null,"abstract":"The past few years have witnessed an emerging interest in building generative models for the graphic design domain. To adopt powerful deep generative models with Transformer-based neural backbones, prior approaches formulate designs as ordered sequences of elements, and simply order the elements in a random or raster manner. We argue that such naive ordering methods are sub-optimal and there is room for improving sample quality through a better choice of order between graphic design elements. In this paper, we seek to explore the space of orderings to find the ordering strategy that optimizes the performance of graphic design generation models. For this, we propose a model, namely Generative Order Learner (GOL), which trains an autoregressive generator on design sequences, jointly with an ordering network that sorts design elements to maximize the generation quality. With unsupervised training on vector graphic design data, our model is capable of learning a content-adaptive ordering approach, called neural order. Our experiments show that the generator trained with our neural order converges faster, achieving remarkably improved generation quality compared with alternative ordering baselines. We conduct a comprehensive analysis of our learned order to gain a deeper understanding of its ordering behavior. 
In addition, our learned order generalizes well to diffusion-based generative models and helps design generators scale up effectively.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"12 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144712154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}