Proceedings of the 27th annual conference on Computer graphics and interactive techniques: Latest publications

Displaced subdivision surfaces
Aaron W. F. Lee, Henry P. Moreton, Hugues Hoppe
DOI: 10.1145/344779.344829
Abstract: In this paper we introduce a new surface representation, the displaced subdivision surface. It represents a detailed surface model as a scalar-valued displacement over a smooth domain surface. Our representation defines both the domain surface and the displacement function using a unified subdivision framework, allowing for simple and efficient evaluation of analytic surface properties. We present a simple, automatic scheme for converting detailed geometric models into such a representation. The challenge in this conversion process is to find a simple subdivision surface that still faithfully expresses the detailed model as its offset. We demonstrate that displaced subdivision surfaces offer a number of benefits, including geometry compression, editing, animation, scalability, and adaptive rendering. In particular, the encoding of fine detail as a scalar function makes the representation extremely compact.
Citations: 371
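The representation evaluates each detailed point by offsetting the smooth domain surface along its own normal by a stored scalar. Below is a minimal sketch of that evaluation, assuming the domain surface, its normal, and the displacement are already available as callables; the sphere and the bump function are stand-ins, not the paper's subdivision-based domain surface.

    import numpy as np

    def displaced_point(domain_point, domain_normal, displacement, u, v):
        """Evaluate p(u,v) = s(u,v) + d(u,v) * n(u,v)."""
        s = domain_point(u, v)    # point on the smooth domain surface
        n = domain_normal(u, v)   # unit normal of the domain surface
        d = displacement(u, v)    # scalar offset stored in the displacement map
        return s + d * n

    # Stand-in domain surface: a unit sphere parameterised by (u, v).
    def sphere_point(u, v):
        return np.array([np.cos(u) * np.sin(v), np.sin(u) * np.sin(v), np.cos(v)])

    def sphere_normal(u, v):
        p = sphere_point(u, v)
        return p / np.linalg.norm(p)

    # Stand-in scalar displacement (a bumpy pattern).
    bump = lambda u, v: 0.05 * np.sin(8.0 * u) * np.sin(8.0 * v)

    p = displaced_point(sphere_point, sphere_normal, bump, 0.3, 1.1)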
Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation
J. P. Lewis, Matt Cordner, Nickson Fong
DOI: 10.1145/344779.344862
Abstract: Pose space deformation generalizes and improves upon both shape interpolation and common skeleton-driven deformation techniques. This deformation approach proceeds from the observation that several types of deformation can be uniformly represented as mappings from a pose space, defined by either an underlying skeleton or a more abstract system of parameters, to displacements in the object's local coordinate frames. Once this uniform representation is identified, previously disparate deformation types can be accomplished within a single unified approach. The advantages of this algorithm include improved expressive power and direct manipulation of the desired shapes, yet the performance associated with traditional shape interpolation remains achievable. Appropriate applications include animation of facial and body deformation for entertainment, telepresence, computer gaming, and other applications where direct sculpting of deformations is desired or where real-time synthesis of a deforming model is required.
Citations: 983
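The key data structure is a set of sculpted per-vertex corrections attached to example poses, interpolated smoothly over pose space at runtime. A common way to realize that interpolation is with radial basis functions; the sketch below uses a Gaussian kernel and made-up example data, and omits the skeletal skinning step that the corrections are layered on top of.

    import numpy as np

    def fit_psd(example_poses, example_deltas, sigma=1.0):
        """Solve for RBF weights so the interpolant reproduces the sculpted
        corrections exactly at each example pose."""
        P = np.asarray(example_poses, float)      # (n, pose_dim)
        D = np.asarray(example_deltas, float)     # (n, n_vertices * 3) flattened corrections
        dist = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
        Phi = np.exp(-(dist / sigma) ** 2)        # Gaussian kernel matrix
        weights = np.linalg.solve(Phi, D)
        return P, weights, sigma

    def psd_delta(pose, P, weights, sigma):
        """Interpolated correction for an arbitrary pose."""
        dist = np.linalg.norm(P - np.asarray(pose, float), axis=-1)
        phi = np.exp(-(dist / sigma) ** 2)
        return phi @ weights

    # Two example poses of a 1-DOF elbow with sculpted corrections for 2 vertices.
    poses  = [[0.0], [1.5]]
    deltas = [np.zeros(6), np.array([0.0, 0.02, 0.0, 0.0, -0.01, 0.01])]
    P, W, s = fit_psd(poses, deltas)
    correction = psd_delta([0.75], P, W, s).reshape(-1, 3)  # add to skinned positions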
Face fixer: compressing polygon meshes with properties
M. Isenburg, J. Snoeyink
DOI: 10.1145/344779.344919
Abstract: Most schemes to compress the topology of a surface mesh have been developed for the lowest common denominator: triangulated meshes. We propose a scheme that handles the topology of arbitrary polygon meshes. It encodes meshes directly in their polygonal representation and extends to capture face groupings in a natural way. By avoiding the triangulation step, we reduce the storage costs for typical polygon models that have group structures and property data.
Citations: 162
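Face Fixer itself emits one label per mesh edge during a region-growing traversal, which is beyond the scope of a short sketch. As a rough, hypothetical illustration of why skipping the triangulation step helps, the snippet below counts the extra faces and internal diagonals that fan-triangulating a quad-dominant model would force a triangle-only coder to describe; the mesh statistics are made up.

    def triangulation_overhead(face_degrees):
        """Extra faces and internal edges introduced by fan-triangulating each
        polygon: a rough proxy for the additional connectivity a triangle-only
        coder must encode."""
        faces_before = len(face_degrees)
        faces_after = sum(d - 2 for d in face_degrees)    # a degree-d face becomes d-2 triangles
        extra_diagonals = sum(d - 3 for d in face_degrees)
        return faces_after - faces_before, extra_diagonals

    # A hypothetical quad-dominant model: 90 quads and 10 triangles.
    print(triangulation_overhead([4] * 90 + [3] * 10))    # -> (90, 90)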
Interactive control for physically-based animation
Joseph Laszlo, M. V. D. Panne, E. Fiume
DOI: 10.1145/344779.344876
Abstract: We propose the use of interactive, user-in-the-loop techniques for controlling physically-based animated characters. With a suitably designed interface, the continuous and discrete input actions afforded by a standard mouse and keyboard allow for the creation of a broad range of motions. We apply our techniques to interactively control planar dynamic simulations of a bounding cat, a gymnastic desk lamp, and a human character capable of walking, running, climbing, and various gymnastic behaviors. The interactive control techniques allow a performer's intuition and knowledge about motion planning to be readily exploited. Video games are the current target application of this work.
Citations: 103
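The interface idea is that discrete key presses select target poses while the physics simulation keeps running; joint torques then track the selected targets. A toy sketch of that loop follows, with hypothetical key bindings, PD gains, single-unit inertias, and a scripted key sequence standing in for the paper's planar articulated characters and live input.

    import numpy as np

    KEY_POSES = {                        # hypothetical key-to-pose bindings (joint angles, radians)
        "a": np.array([0.4, -0.8]),      # crouch
        "s": np.array([0.0,  0.0]),      # stand
        "d": np.array([-0.6, 0.9]),      # reach
    }
    KP, KD = 60.0, 4.0

    def control_torques(target, q, qdot):
        """PD tracking of the currently selected target pose."""
        return KP * (target - q) - KD * qdot

    def simulate_step(q, qdot, torque, dt=0.005, inertia=1.0):
        """Toy forward dynamics, for illustration only."""
        qddot = torque / inertia
        return q + dt * qdot, qdot + dt * qddot

    q, qdot = np.zeros(2), np.zeros(2)
    for key in "aaassd":                 # scripted stand-in for live key events
        target = KEY_POSES[key]
        tau = control_torques(target, q, qdot)
        q, qdot = simulate_step(q, qdot, tau)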
The digital Michelangelo project: 3D scanning of large statues
M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean E. Anderson, James Davis, Jeremy Ginsberg, Jonathan Shade, Duane Fulk
DOI: 10.1145/344779.344849
Abstract: We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset is of the David: 2 billion polygons and 7,000 color images. In this paper, we discuss the challenges we faced in building this system, the solutions we employed, and the lessons we learned. We focus in particular on the unusual design of our laser triangulation scanner and on the algorithms and software we developed for handling very large scanned models.
Citations: 1815
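One recurring step in such a pipeline is aligning overlapping range scans into a common frame. The sketch below shows the standard least-squares rigid alignment (the SVD/Procrustes step at the core of ICP-style registration) given known point correspondences; it is a generic illustration rather than the project's alignment software, which must also establish the correspondences and align many scans jointly.

    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst points,
        given one-to-one correspondences (the Kabsch/Procrustes step)."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:         # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        return R, t

    # Toy check: recover a known rotation and translation.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    B = A @ R_true.T + np.array([0.1, -0.2, 0.05])
    R, t = rigid_align(A, B)             # R ~ R_true, t ~ (0.1, -0.2, 0.05)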
Environment matting extensions: towards higher accuracy and real-time capture
Yung-Yu Chuang, Douglas E. Zongker, J. Hindorff, B. Curless, D. Salesin, R. Szeliski
DOI: 10.1145/344779.344844
Abstract: Environment matting is a generalization of traditional bluescreen matting. By photographing an object in front of a sequence of structured light backdrops, a set of approximate light-transport paths through the object can be computed. The original environment matting research chose a middle ground, using a moderate number of photographs to produce results that were reasonably accurate for many objects. In this work, we extend the technique in two opposite directions: recovering a more accurate model at the expense of using additional structured light backdrops, and obtaining a simplified matte using just a single backdrop. The first extension allows for the capture of complex and subtle interactions of light with objects, while the second allows for video capture of colorless objects in motion.
Citations: 155
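An environment matte stores, per pixel, roughly a foreground color, a coverage value, a per-channel reflectance/transmittance factor, and a backdrop region the pixel gathers light from; compositing onto a new backdrop then averages over that region. The sketch below is a simplified per-pixel composite in that spirit only; the variable names and the axis-aligned region are illustrative assumptions, and the paper's extensions fit richer (for example Gaussian-weighted) mappings.

    import numpy as np

    def composite_pixel(foreground, alpha, rho, backdrop, region):
        """Composite one environment-matted pixel over a new backdrop:
        C = F + (1 - alpha) * rho * mean(backdrop over the mapped region).
        Simplified sketch; not the exact equation from either paper."""
        x0, y0, x1, y1 = region                      # axis-aligned area the pixel refracts
        patch = backdrop[y0:y1, x0:x1].reshape(-1, 3)
        return foreground + (1.0 - alpha) * rho * patch.mean(axis=0)

    backdrop = np.random.default_rng(1).random((64, 64, 3))
    pixel = composite_pixel(foreground=np.array([0.05, 0.05, 0.05]),
                            alpha=0.1,
                            rho=np.array([0.9, 0.95, 1.0]),
                            backdrop=backdrop,
                            region=(10, 12, 18, 20))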
Non-photorealistic virtual environments
A. W. Klein, Wilmot Li, M. Kazhdan, W. Corrêa, Adam Finkelstein, T. Funkhouser
DOI: 10.1145/344779.345075
Abstract: We describe a system for non-photorealistic rendering (NPR) of virtual environments. In real time, it synthesizes imagery of architectural interiors using stroke-based textures. We address the four main challenges of such a system (interactivity, visual detail, controlled stroke size, and frame-to-frame coherence) through image-based rendering (IBR) methods. In a preprocessing stage, we capture photos of a real or synthetic environment, map the photos to a coarse model of the environment, and run a series of NPR filters to generate textures. At runtime, the system re-renders the NPR textures over the geometry of the coarse model, and it adds dark lines that emphasize creases and silhouettes. We provide a method for constructing non-photorealistic textures from photographs that largely avoids seams in the resulting imagery. We also offer a new construction, art-maps, to control stroke size across the images. Finally, we show a working system that provides an immersive experience rendered in a variety of NPR styles.
Citations: 114
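Art-maps lean on mip-mapping: each mip level is authored with strokes sized for that level, so whichever level the renderer selects, strokes come out at roughly the intended size on screen. The sketch below picks a level from the screen-space texel footprint; in the real system this selection happens implicitly in texture hardware, so the function and its inputs are only an illustration of the idea.

    import math

    def art_map_level(texels_per_pixel, num_levels):
        """Pick the art-map (mipmap) level whose stroke size matches the current
        screen footprint, keeping strokes roughly constant in screen space."""
        level = math.log2(max(texels_per_pixel, 1e-6))
        return min(max(int(round(level)), 0), num_levels - 1)

    # A distant wall where one screen pixel covers about 8 texels picks level 3.
    print(art_map_level(texels_per_pixel=8.0, num_levels=6))   # -> 3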
Deep shadow maps
Thomas Lokovic, Eric Veach
DOI: 10.1145/344779.344958
Abstract: We introduce deep shadow maps, a technique that produces fast, high-quality shadows for primitives such as hair, fur, and smoke. Unlike traditional shadow maps, which store a single depth at each pixel, deep shadow maps store a representation of the fractional visibility through a pixel at all possible depths. Deep shadow maps have several advantages. First, they are prefiltered, which allows faster shadow lookups and much smaller memory footprints than regular shadow maps of similar quality. Second, they support shadows from partially transparent surfaces and volumetric objects such as fog. Third, they handle important cases of motion blur at no extra cost. The algorithm is simple to implement and can be added easily to existing renderers as an alternative to ordinary shadow maps.
Citations: 354
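A deep shadow map pixel can be thought of as a compressed, piecewise-linear function from depth to fractional visibility. Below is a minimal sketch of the lookup side, assuming the per-pixel function is stored as sorted (depth, visibility) vertices; the prefiltering and compression of these functions described in the paper are omitted, and the sample pixel is made up.

    import bisect

    def visibility(function, z):
        """Evaluate a deep-shadow-map pixel: a piecewise-linear fractional
        visibility function stored as sorted (depth, visibility) vertices."""
        depths = [d for d, _ in function]
        i = bisect.bisect_right(depths, z)
        if i == 0:
            return function[0][1]
        if i == len(function):
            return function[-1][1]
        (z0, v0), (z1, v1) = function[i - 1], function[i]
        t = (z - z0) / (z1 - z0)
        return v0 + t * (v1 - v0)

    # A hypothetical pixel looking through hair: fully lit, then gradual attenuation.
    pixel = [(0.0, 1.0), (2.0, 1.0), (2.5, 0.35), (4.0, 0.3)]
    print(visibility(pixel, 2.25))   # ~0.675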
Normal meshes
I. Guskov, Kiril Vidimce, W. Sweldens, P. Schröder
DOI: 10.1145/344779.344831
Abstract: Normal meshes are new fundamental surface descriptions inspired by differential geometry. A normal mesh is a multiresolution mesh where each level can be written as a normal offset from a coarser version. Hence the mesh can be stored with a single float per vertex. We present an algorithm to approximate any surface arbitrarily closely with a normal semi-regular mesh. Normal meshes can be useful in numerous applications such as compression, filtering, rendering, texturing, and modeling.
Citations: 419
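The defining property is that each vertex introduced at a finer level is reproduced from the coarser level plus a single scalar: the subdivision prediction offset along the local normal. A toy sketch of reconstructing one such vertex, with a made-up prediction and normal:

    import numpy as np

    def reconstruct_vertex(base_point, base_normal, t):
        """Normal-mesh reconstruction of one new vertex: the prediction from the
        coarser level offset along the local unit normal by the stored scalar t."""
        return base_point + t * base_normal

    # One refinement step on a coarse edge: predict the midpoint, then offset it
    # along the surface normal at that point (stand-in values for illustration).
    a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    midpoint = 0.5 * (a + b)
    normal = np.array([0.0, 0.0, 1.0])   # normal of the coarse surface at the midpoint
    t = 0.12                              # the only value stored for this vertex
    new_vertex = reconstruct_vertex(midpoint, normal, t)   # -> [0.5, 0.0, 0.12]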
Style machines
M. Brand, Aaron Hertzmann
DOI: 10.1145/344779.344865
Abstract: We approach the problem of stylistic motion synthesis by learning motion patterns from a highly varied set of motion capture sequences. Each sequence may have a distinct choreography, performed in a distinct style. Learning identifies common choreographic elements across sequences, the different styles in which each element is performed, and a small number of stylistic degrees of freedom which span the many variations in the dataset. The learned model can synthesize novel motion data in any interpolation or extrapolation of styles. For example, it can convert novice ballet motions into the more graceful modern dance of an expert. The model can also be driven by video, by scripts or even by noise to generate new choreography and synthesize virtual motion-capture in many styles.
Citations: 769
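The "stylistic degrees of freedom" behave like coordinates in a low-dimensional style space, so new styles can be obtained by interpolating or extrapolating between learned ones. The sketch below only illustrates that blending step with made-up style vectors; in the paper the blended coordinates parameterize a learned hidden-Markov-style motion model, which is not shown here.

    import numpy as np

    def blend_style(style_a, style_b, w):
        """Interpolate (0 <= w <= 1) or extrapolate (w > 1) between two learned
        style vectors; the blended vector would then condition motion synthesis."""
        return (1.0 - w) * np.asarray(style_a, float) + w * np.asarray(style_b, float)

    novice_ballet = np.array([0.2, -0.5, 1.1])   # hypothetical learned style coordinates
    modern_expert = np.array([0.9,  0.3, 0.4])
    half_way   = blend_style(novice_ballet, modern_expert, 0.5)
    exaggerate = blend_style(novice_ballet, modern_expert, 1.4)   # beyond the expert style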