ACM Transactions on Graphics (TOG): Latest Articles
CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592106
Yuefan Shen, Shunsuke Saito, Ziyan Wang, O. Maury, Chenglei Wu, J. Hodgins, Youyi Zheng, Giljoo Nam
Abstract: We introduce CT2Hair, a fully automatic framework for creating high-fidelity 3D hair models that are suitable for use in downstream graphics applications. Our approach utilizes real-world hair wigs as input, and is able to reconstruct hair strands for a wide range of hair styles. Our method leverages computed tomography (CT) to create density volumes of the hair regions, allowing us to see through the hair, unlike image-based approaches, which are limited to reconstructing the visible surface. To address the noise and limited resolution of the input density volumes, we employ a coarse-to-fine approach. This process first recovers guide strands with estimated 3D orientation fields, and then populates dense strands through a novel neural interpolation of the guide strands. The generated strands are then refined to conform to the input density volumes. We demonstrate the robustness of our approach by presenting results on a wide variety of hair styles and conducting thorough evaluations on both real-world and synthetic datasets. Code and data for this paper are at github.com/facebookresearch/CT2Hair.
Pages: 1-13
Citations: 0
Inkjet 4D Print: Self-folding Tessellated Origami Objects by Inkjet UV Printing
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592409
Koya Narumi, Kazuki Koyama, K. Suto, Yuta Noma, Hiroki Sato, Tomohiro Tachi, Masaaki Sugimoto, T. Igarashi, Yoshihiro Kawahara
Abstract: We propose Inkjet 4D Print, a self-folding fabrication method of 3D origami tessellations by printing 2D patterns on both sides of a heat-shrinkable base sheet, using a commercialized inkjet ultraviolet (UV) printer. Compared to the previous folding-based 4D printing approach using fused deposition modeling (FDM) 3D printers [An et al. 2018], our method has merits in (1) more than 1200 times higher resolution in terms of the number of self-foldable facets, (2) 2.8 times faster printing speed, and (3) optional full-color decoration. This paper describes the material selection, the folding mechanism, the heating condition, and the printing patterns to self-fold both known and freeform tessellations. We also evaluated the self-folding resolution, the printing and transformation speed, and the shape accuracy of our method. Finally, we demonstrated applications enabled by our self-foldable tessellated objects.
Pages: 1-13
Citations: 2
Learning Physically Simulated Tennis Skills from Broadcast Videos
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592408
Haotian Zhang, Ye Yuan, Viktor Makoviychuk, Yunrong Guo, S. Fidler, X. B. Peng, K. Fatahalian
Abstract: We present a system that learns diverse, physically simulated tennis skills from large-scale demonstrations of tennis play harvested from broadcast videos. Our approach is built upon hierarchical models, combining a low-level imitation policy and a high-level motion planning policy to steer the character in a motion embedding learned from broadcast videos. When deployed at scale on large video collections that encompass a vast set of examples of real-world tennis play, our approach can learn complex tennis shotmaking skills and realistically chain together multiple shots into extended rallies, using only simple rewards and without explicit annotations of stroke types. To address the low quality of motions extracted from broadcast videos, we correct estimated motion with physics-based imitation, and use a hybrid control policy that overrides erroneous aspects of the learned motion embedding with corrections predicted by the high-level policy. We demonstrate that our system produces controllers for physically simulated tennis players that can hit the incoming ball to target positions accurately using a diverse array of strokes (serves, forehands, and backhands), spins (topspins and slices), and playing styles (one/two-handed backhands, left/right-handed play). Overall, our system can synthesize two physically simulated characters playing extended tennis rallies with simulated racket and ball dynamics. Code and data for this work are available at https://research.nvidia.com/labs/toronto-ai/vid2player3d/.
Pages: 1-14
Citations: 10
Winding Numbers on Discrete Surfaces
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592401
Nicole Feng, M. Gillespie, Keenan Crane
Abstract: In the plane, the winding number is the number of times a curve wraps around a given point. Winding numbers are a basic component of geometric algorithms such as point-in-polygon tests, and their generalization to data with noise or topological errors has proven valuable for geometry processing tasks ranging from surface reconstruction to mesh booleans. However, standard definitions do not immediately apply on surfaces, where not all curves bound regions. We develop a meaningful generalization, starting with the well-known relationship between winding numbers and harmonic functions. By processing the derivatives of such functions, we can robustly filter out components of the input that do not bound any region. Ultimately, our algorithm yields (i) a closed, completed version of the input curves, (ii) integer labels for regions that are meaningfully bounded by these curves, and (iii) the complementary curves that do not bound any region. The main computational cost is solving a standard Poisson equation, or for surfaces with nontrivial topology, a sparse linear program. We also introduce special basis functions to represent singularities that naturally occur at endpoints of open curves.
Pages: 1-17
Citations: 1
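The planar winding number that this abstract takes as its starting point can be computed by summing the signed angles each polygon edge subtends at the query point; the total is 2π times the winding number. A minimal sketch of that classical 2D definition (the function name `winding_number` is my own, not from the paper):

```python
import math

def winding_number(point, polygon):
    """Winding number of a closed polygon around a point.

    Sums the signed angle subtended at `point` by each edge;
    the total equals 2*pi times the winding number.
    """
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i][0] - px, polygon[i][1] - py
        x1, y1 = polygon[(i + 1) % n][0] - px, polygon[(i + 1) % n][1] - py
        # atan2(cross, dot) is the signed angle between the two vertex directions.
        total += math.atan2(x0 * y1 - x1 * y0, x0 * x1 + y0 * y1)
    return round(total / (2.0 * math.pi))

# A counter-clockwise unit square winds once around its center,
# and zero times around an exterior point.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(winding_number((0.5, 0.5), square))  # 1
print(winding_number((2.0, 0.5), square))  # 0
```

The paper's contribution is precisely that this definition breaks down on surfaces where curves need not bound regions; the sketch only illustrates the planar baseline being generalized.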
In-Timestep Remeshing for Contacting Elastodynamics
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592428
Z. Ferguson, T. Schneider, D. Kaufman, Daniele Panozzo
Abstract: We propose In-Timestep Remeshing, a fully coupled, adaptive meshing algorithm for contacting elastodynamics where remeshing steps are tightly integrated, implicitly, within the timestep solve. Our algorithm refines and coarsens the domain automatically by measuring physical energy changes within each ongoing timestep solve. This provides consistent, degree-of-freedom-efficient, productive remeshing that, by construction, is physics-aware and so avoids the errors, over-refinements, artifacts, per-example hand-tuning, and instabilities commonly encountered when remeshing with timestepping methods. Our in-timestep computation then ensures that each simulation step's output is both a converged stable solution on the updated mesh and a temporally consistent trajectory with respect to the model and solution of the last timestep. At the same time, the output is guaranteed safe (intersection- and inversion-free) across all operations. We demonstrate applications across a wide range of extreme stress tests with challenging contacts, sharp geometries, extreme compressions, large timesteps, and wide material stiffness ranges, all scenarios well-appreciated to challenge existing remeshing methods.
Pages: 1-15
Citations: 0
Generalizing Shallow Water Simulations with Dispersive Surface Waves
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592098
S. Jeschke, C. Wojtan
Abstract: This paper introduces a novel method for simulating large bodies of water as a height field. At the start of each time step, we partition the waves into a bulk flow (which approximately satisfies the assumptions of the shallow water equations) and surface waves (which approximately satisfy the assumptions of Airy wave theory). We then solve the two wave regimes separately using appropriate state-of-the-art techniques, and re-combine the resulting wave velocities at the end of each step. This strategy leads to the first heightfield wave model capable of simulating complex interactions between both deep and shallow water effects, like the waves from a boat wake sloshing up onto a beach, or a dam break producing wave interference patterns and eddies. We also analyze the numerical dispersion created by our method and derive an exact correction factor for waves at a constant water depth, giving us a numerically perfect re-creation of theoretical water wave dispersion patterns.
Pages: 1-12
Citations: 0
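The two regimes the abstract splits apart are both limits of the standard Airy dispersion relation, omega^2 = g*k*tanh(k*h): in deep water phase speed depends on wavelength (dispersive), while in shallow water it collapses to sqrt(g*h) (non-dispersive, the shallow-water-equation regime). A small sketch of that textbook relation, not of the paper's solver or its correction factor:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def airy_phase_speed(wavelength, depth):
    """Phase speed c = sqrt((g/k) * tanh(k*h)) from the Airy
    dispersion relation omega^2 = g * k * tanh(k * h)."""
    k = 2.0 * math.pi / wavelength  # wavenumber
    return math.sqrt((G / k) * math.tanh(k * depth))

# Deep water (depth >> wavelength): tanh -> 1, so c -> sqrt(g/k);
# longer waves travel faster (dispersion).
# Shallow water (depth << wavelength): tanh(k*h) -> k*h, so c -> sqrt(g*h);
# speed is independent of wavelength.
print(airy_phase_speed(10.0, 100.0))  # deep-water speed for a 10 m wave
print(airy_phase_speed(10.0, 0.2))    # shallow-water speed, close to sqrt(9.81 * 0.2)
```

This is why a single heightfield model struggles to cover both regimes at once, which is the gap the paper's per-step partitioning addresses.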
ScanBot: Autonomous Reconstruction via Deep Reinforcement Learning
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592113
Hezhi Cao, Xia Xi, Guan Wu, Ruizhen Hu, Ligang Liu
Abstract: Autoscanning of an unknown environment is the key to many AR/VR and robotic applications. However, autonomous reconstruction with both high efficiency and quality remains a challenging problem. In this work, we propose a reconstruction-oriented autoscanning approach, called ScanBot, which utilizes hierarchical deep reinforcement learning techniques for global region-of-interest (ROI) planning to improve the scanning efficiency and local next-best-view (NBV) planning to enhance the reconstruction quality. Given the partially reconstructed scene, the global policy designates an ROI with insufficient exploration or reconstruction. The local policy is then applied to refine the reconstruction quality of objects in this region by planning and scanning a series of NBVs. A novel mixed 2D-3D representation is designed for these policies, where a 2D quality map with tailored quality channels encoding the scanning progress is consumed by the global policy, and a coarse-to-fine 3D volumetric representation that embodies both local environment and object completeness is fed to the local policy. These two policies iterate until the whole scene has been completely explored and scanned. To speed up the learning of complex environmental dynamics and enhance the agent's memory for spatial-temporal inference, we further introduce two novel auxiliary learning tasks to guide the training of our global policy. Thorough evaluations and comparisons are carried out to show the feasibility of our proposed approach and its advantages over previous methods. Code and data are available at https://github.com/HezhiCao/Scanbot.
Pages: 1-16
Citations: 0
Eventfulness for Interactive Video Alignment
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592118
Jiatian Sun, Longxiuling Deng, Triantafyllos Afouras, Andrew Owens, Abe Davis
Abstract: Humans are remarkably sensitive to the alignment of visual events with other stimuli, which makes synchronization one of the hardest tasks in video editing. A key observation of our work is that most of the alignment we do involves salient localizable events that occur sparsely in time. By learning how to recognize these events, we can greatly reduce the space of possible synchronizations that an editor or algorithm has to consider. Furthermore, by learning descriptors of these events that capture additional properties of visible motion, we can build active tools that adapt their notion of eventfulness to a given task as they are being used. Rather than learning an automatic solution to one specific problem, our goal is to make a much broader class of interactive alignment tasks significantly easier and less time-consuming. We show that a suitable visual event descriptor can be learned entirely from stochastically-generated synthetic video. We then demonstrate the usefulness of learned and adaptive eventfulness by integrating it in novel interactive tools for applications including audio-driven time warping of video and the extraction and application of sound effects across different videos.
Pages: 1-10
Citations: 0
Meso-Facets for Goniochromatic 3D Printing
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592137
Lubna Abu Rmaileh, A. Brunton
Abstract: Goniochromatic materials and objects appear to have different colors depending on viewing direction. This occurs in nature, such as in wood or minerals, and in human-made objects such as metal and effect pigments. In this paper, we propose algorithms to control multi-material 3D printers to produce goniochromatic effects on arbitrary surfaces by procedurally augmenting the input surface with meso-facets, which allow distinct colors to be assigned to different viewing directions of the input surface while introducing minimal changes to that surface. Previous works apply only to 2D or 2.5D surfaces, require multiple fabrication technologies, or make considerable changes to the input surface and require special post-processing, whereas our approach requires a single fabrication technology and no special post-processing. Our framework is general, allowing different generating functions for both the shape and color of the facets. Working with implicit representations allows us to generate geometric features at the limit of device resolution without tessellation. We evaluate our approach for performance, showing negligible overhead compared to baseline color 3D print processing, and for goniochromatic quality.
Pages: 1-12
Citations: 2
Revisiting Controlled Mixture Sampling for Rendering Applications
ACM Transactions on Graphics (TOG) Pub Date: 2023-07-26 DOI: 10.1145/3592435
Qingqin Hua, Pascal Grittmann, P. Slusallek
Abstract: Monte Carlo rendering makes heavy use of mixture sampling and multiple importance sampling (MIS). Previous work has shown that control variates can be used to make such mixtures more efficient and more robust. However, the existing approaches failed to yield practical applications, chiefly because their underlying theory is based on the unrealistic assumption that a single mixture is optimized for a single integral. This is in stark contrast with rendering reality, where millions of integrals are computed, one per pixel, and each is infinitely recursive. We adapt and extend the theory introduced by previous work to tackle the challenges of real-world rendering applications. We achieve robust mixture sampling and (approximately) optimal MIS weighting for common applications such as light selection, BSDF sampling, and path guiding.
Pages: 1-13
Citations: 0
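For readers unfamiliar with the two building blocks this abstract assumes, a mixture density combines several sampling techniques with selection probabilities, and the standard balance heuristic weights a sample by its technique's share of the total density. A minimal illustration of those textbook quantities (this is the baseline the paper improves upon, not its optimized weighting):

```python
def mixture_pdf(weights, pdfs):
    """Density of a mixture that selects technique j with probability
    weights[j] and then samples from the j-th technique's pdf,
    all evaluated at the same sample point."""
    return sum(w * p for w, p in zip(weights, pdfs))

def balance_heuristic_weight(pdfs, i):
    """Balance-heuristic MIS weight for a sample drawn from
    technique i: w_i = p_i / sum_j p_j."""
    return pdfs[i] / sum(pdfs)

# Two techniques evaluated at one sample point, e.g. BSDF sampling
# and light selection (densities here are made up for illustration):
p = [0.8, 0.2]   # per-technique densities at the sample
w = [0.5, 0.5]   # mixture selection probabilities
print(mixture_pdf(w, p))               # 0.5
print(balance_heuristic_weight(p, 0))  # 0.8
```

The paper's point is that choosing `w` optimally per pixel, across recursive integrals, is what previous control-variate theory could not handle in practice.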