Proceedings of the 27th annual conference on Computer graphics and interactive techniques: latest publications

Tangible interaction + graphical interpretation: a new approach to 3D modeling
David Anderson, James L. Frankel, J. Marks, A. Agarwala, P. Beardsley, J. Hodgins, D. Leigh, Kathy Ryall, E. Sullivan, J. Yedidia
DOI: https://doi.org/10.1145/344779.344960
Published: 2000-07-01
Abstract: Construction toys are a superb medium for geometric models. We argue that such toys, suitably instrumented or sensed, could be the inspiration for a new generation of easy-to-use, tangible modeling systems—especially if the tangible modeling is combined with graphical-interpretation techniques for enhancing nascent models automatically. The three key technologies needed to realize this idea are embedded computation, vision-based acquisition, and graphical interpretation. We sample these technologies in the context of two novel modeling systems: physical building blocks that self-describe, interpret, and decorate the structures into which they are assembled; and a system for scanning, interpreting, and animating clay figures.
Citations: 184
Escherization
C. Kaplan, D. Salesin
DOI: https://doi.org/10.1145/344779.345022
Published: 2000-07-01
Abstract: This paper introduces and presents a solution to the “Escherization” problem: given a closed figure in the plane, find a new closed figure that is similar to the original and tiles the plane. Our solution works by using a simulated annealer to optimize over a parameterization of the “isohedral” tilings, a class of tilings that is flexible enough to encompass nearly all of Escher's own tilings, and yet simple enough to be encoded and explored by a computer. We also describe a representation for isohedral tilings that allows for highly interactive viewing and rendering. We demonstrate the use of these tools—along with several additional techniques for adding decorations to tilings—with a variety of original ornamental designs.
Citations: 69
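
The optimization the abstract describes, a simulated annealer searching a parameterization of the isohedral tilings, is built around a standard annealing loop. The sketch below is not from the paper: it shows only that generic loop with a placeholder energy function, while the actual tiling parameterization and the shape-comparison metric used as the energy are the paper's contribution and are not reproduced here.

import math
import random

def simulated_annealing(objective, x0, step=0.05, t0=1.0, t_min=1e-4, cooling=0.95, iters_per_t=100):
    # Generic annealing loop: accept worsening moves with probability exp(-delta / T).
    x = list(x0)
    e = objective(x)
    best_x, best_e = list(x), e
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = list(x)
            i = random.randrange(len(cand))
            cand[i] += random.uniform(-step, step)   # perturb one parameter
            e_cand = objective(cand)
            delta = e_cand - e
            if delta < 0 or random.random() < math.exp(-delta / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = list(x), e
        t *= cooling
    return best_x, best_e

# Placeholder energy: squared distance of the parameter vector from an arbitrary target vector.
target = [0.3, -0.7, 1.2]
energy = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
params, final_e = simulated_annealing(energy, x0=[0.0, 0.0, 0.0])
print(params, final_e)
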
Patching Catmull-Clark meshes
J. Peters
DOI: https://doi.org/10.1145/344779.344908
Published: 2000-07-01
Abstract: Named after the title of this paper, the PCCM transformation is a simple, explicit algorithm that creates large, smoothly joining bicubic NURBS patches from a refined Catmull-Clark subdivision mesh. The resulting patches are maximally large in the sense that one patch corresponds to one quadrilateral facet of the initial, coarsest quadrilateral mesh before subdivision. The patches join parametrically C2 and agree with the Catmull-Clark limit surface except in the immediate neighborhood of extraordinary mesh nodes; in such a neighborhood they join at least with tangent continuity and interpolate the limit of the extraordinary mesh node. The PCCM transformation integrates naturally with array-based implementations of subdivision surfaces.
Citations: 103
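
Away from extraordinary nodes, a quad of a refined Catmull-Clark mesh with a regular 4x4 neighborhood of control points already defines a bicubic uniform B-spline patch; that standard fact is the regular case the PCCM patches reduce to, while the interesting construction near extraordinary nodes is the paper's contribution and is not shown here. A minimal evaluation sketch for the regular case, assuming NumPy and a 4x4 control grid:

import numpy as np

def cubic_bspline_basis(t):
    # Uniform cubic B-spline basis functions at parameter t in [0, 1].
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def evaluate_patch(control_grid, u, v):
    # Evaluate a bicubic uniform B-spline patch from a 4x4 grid of 3D control points.
    bu = cubic_bspline_basis(u)          # weights along u
    bv = cubic_bspline_basis(v)          # weights along v
    # Tensor-product combination: sum_i sum_j bu[i] * bv[j] * P[i, j]
    return np.einsum("i,j,ijk->k", bu, bv, control_grid)

# Example: a 4x4 grid sampled from a gently curved height field.
grid = np.array([[[i, j, 0.1 * (i - 1.5) ** 2] for j in range(4)] for i in range(4)], dtype=float)
print(evaluate_patch(grid, 0.5, 0.5))
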
Plenoptic sampling
Jinxiang Chai, S. Chan, H. Shum, Xin Tong
DOI: https://doi.org/10.1145/344779.344932
Published: 2000-07-01
Abstract: This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering. Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
Citations: 744
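
The practical consequence of the analysis is a maximum camera spacing determined only by the depth range, the focal length, and the highest image-plane frequency. The sketch below is a simplified, back-of-the-envelope restatement of that bound (Lambertian scene, no occlusion, constants depending on the frequency convention); it is illustrative and is not the paper's exact formula.

def max_camera_spacing(z_min, z_max, focal_length, pixel_spacing):
    # Illustrative form of the plenoptic-sampling bound: the spectral support of the
    # light field lies between the lines Omega_t = -(f / z) * Omega_v for z in
    # [z_min, z_max], so the replicas introduced by a camera spacing dt stay disjoint
    # as long as dt is below roughly 2 * dv / (f * (1/z_min - 1/z_max)), where dv is
    # the image-plane sample spacing. Treat the constant factor as approximate.
    depth_term = 1.0 / z_min - 1.0 / z_max
    if depth_term <= 0:
        raise ValueError("z_min must be smaller than z_max")
    return 2.0 * pixel_spacing / (focal_length * depth_term)

# Example: scene between 2 m and 10 m, 35 mm focal length, 10 micrometre pixel spacing.
print(max_camera_spacing(z_min=2.0, z_max=10.0, focal_length=0.035, pixel_spacing=10e-6))
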
Conservative visibility preprocessing using extended projections
F. Durand, G. Drettakis, J. Thollot, C. Puech
DOI: https://doi.org/10.1145/344779.344891
Published: 2000-07-01
Abstract: Visualization of very complex scenes can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation demonstrating significant speedup with respect to view-frustum culling only, without the computational overhead of on-line occlusion culling.
Citations: 169
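
The core idea, intersecting an occluder's projections over all viewpoints of the cell while unioning an occludee's, can be illustrated in flatland, where projections onto a plane become intervals on a line. The sketch below is such a toy version for an axis-aligned square cell, assuming the occluder lies between the cell and the projection plane and the occludee lies beyond it; the paper's operators handle the general 3D cases (including occludees behind the plane) far more carefully.

def project_point(eye, point, plane_x):
    # Project 'point' onto the vertical line x = plane_x as seen from 'eye' (2D flatland).
    ex, ey = eye
    px, py = point
    t = (plane_x - ex) / (px - ex)
    return ey + t * (py - ey)

def segment_projection(eye, seg, plane_x):
    # Interval covered on the projection line by a segment, from one viewpoint.
    a = project_point(eye, seg[0], plane_x)
    b = project_point(eye, seg[1], plane_x)
    return (min(a, b), max(a, b))

def extended_projection_occluder(cell_corners, seg, plane_x):
    # Intersection of per-viewpoint projections: hidden from EVERY viewpoint in the cell.
    intervals = [segment_projection(e, seg, plane_x) for e in cell_corners]
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo < hi else None

def extended_projection_occludee(cell_corners, seg, plane_x):
    # Union (bounding interval) of per-viewpoint projections: possibly seen from the cell.
    intervals = [segment_projection(e, seg, plane_x) for e in cell_corners]
    return (min(i[0] for i in intervals), max(i[1] for i in intervals))

def is_occluded(cell_corners, occluder, occludee, plane_x):
    # Conservatively true only if the occludee's extended projection lies inside the occluder's.
    occ = extended_projection_occluder(cell_corners, occluder, plane_x)
    if occ is None:
        return False
    tee = extended_projection_occludee(cell_corners, occludee, plane_x)
    return occ[0] <= tee[0] and tee[1] <= occ[1]

# A square viewing cell, a wall between the cell and the plane, and a small object behind the wall.
cell = [(0, 0), (0, 1), (1, 0), (1, 1)]
wall = ((3, -4), (3, 5))          # occluder between the cell and the projection plane
box = ((6, 0.2), (6, 0.8))        # occludee beyond the projection plane
print(is_occluded(cell, wall, box, plane_x=5.0))
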
Interactive manipulation of rigid body simulations
J. Popović, S. Seitz, M. Erdmann, Zoran Popovic, A. Witkin
DOI: https://doi.org/10.1145/344779.344880
Published: 2000-07-01
Abstract: Physical simulation of dynamic objects has become commonplace in computer graphics because it produces highly realistic animations. In this paradigm the animator provides a few physical parameters such as the objects' initial positions and velocities, and the simulator automatically generates realistic motions. The resulting motion, however, is difficult to control because even a small adjustment of the input parameters can drastically affect the subsequent motion. Furthermore, the animator often wishes to change the end-result of the motion instead of the initial physical parameters. We describe a novel interactive technique for intuitive manipulation of rigid multi-body simulations. Using our system, the animator can select bodies at any time and simply drag them to desired locations. In response, the system computes the required physical parameters and simulates the resulting motion. Surface characteristics such as normals and elasticity coefficients can also be automatically adjusted to provide a greater range of feasible motions, if the animator so desires. Because the entire simulation editing process runs at interactive speeds, the animator can rapidly design complex physical animations that would be difficult to achieve with existing rigid body simulators.
Citations: 192
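
As the simplest possible instance of computing physical parameters from a desired end state, the sketch below solves for the initial velocity that makes a single free-flying body reach a target position at a chosen time, then checks it by integration. This is not the paper's method, which edits full multi-body simulations with contacts; it only illustrates the "specify the outcome, recover the parameters" idea in closed form.

import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def initial_velocity_for_target(x0, x_target, T):
    # Closed-form inverse of free flight: x(T) = x0 + v*T + 0.5*g*T^2
    # so v = (x_target - x0 - 0.5*g*T^2) / T.
    return (np.asarray(x_target) - np.asarray(x0) - 0.5 * GRAVITY * T ** 2) / T

def simulate(x0, v, T, steps=1000):
    # Semi-implicit Euler check that the computed velocity lands near the target.
    dt = T / steps
    x = np.asarray(x0, dtype=float).copy()
    vel = np.asarray(v, dtype=float).copy()
    for _ in range(steps):
        vel += GRAVITY * dt
        x += vel * dt
    return x

v = initial_velocity_for_target(x0=[0, 1, 0], x_target=[4, 0, 2], T=1.5)
print(v, simulate([0, 1, 0], v, 1.5))
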
QSplat: a multiresolution point rendering system for large meshes
S. Rusinkiewicz, M. Levoy
DOI: https://doi.org/10.1145/344779.344940
Published: 2000-07-01
Abstract: Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.
Citations: 1246
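
The traversal the abstract describes, recursing into a bounding-sphere hierarchy until a node's projected size falls below a screen-space threshold and then emitting a point splat, can be sketched as follows. This toy version assumes camera-space sphere centers and a simple pinhole size estimate; it omits QSplat's compact quantized node layout, view-frustum culling, and normal-cone backface culling.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SphereNode:
    center: tuple                              # (x, y, z) in camera space, z looking forward
    radius: float
    children: List["SphereNode"] = field(default_factory=list)

def projected_size(node, focal_length):
    # Approximate screen-space diameter of the node's bounding sphere.
    depth = node.center[2]
    if depth <= 0:
        return 0.0                             # behind the camera in this toy version
    return 2.0 * node.radius * focal_length / depth

def render(node, focal_length, threshold, splats):
    # Draw a node as one splat if it is small enough on screen, otherwise recurse.
    size = projected_size(node, focal_length)
    if size <= 0.0:
        return
    if size < threshold or not node.children:
        splats.append((node.center, size))     # leaf or small enough: emit one splat
        return
    for child in node.children:
        render(child, focal_length, threshold, splats)

# A tiny two-level hierarchy: one parent sphere with two child spheres.
root = SphereNode((0.0, 0.0, 10.0), 1.0,
                  [SphereNode((-0.5, 0.0, 10.0), 0.5), SphereNode((0.5, 0.0, 10.0), 0.5)])
out = []
render(root, focal_length=500.0, threshold=20.0, splats=out)
print(out)
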
Silhouette clipping
P. Sander, X. Gu, S. Gortler, Hugues Hoppe, John M. Snyder
DOI: https://doi.org/10.1145/344779.344935
Published: 2000-07-01
Abstract: Approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. To eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. The coarse mesh is obtained using progressive hulls, a novel representation with the nesting property required for proper clipping. We describe an improved technique for constructing texture and normal maps over this coarse mesh. Given a perspective view, silhouettes are efficiently extracted from the original mesh using a precomputed search tree. Within the tree, hierarchical culling is achieved using pairs of anchored cones. The extracted silhouette edges are used to set the hardware stencil buffer and alpha buffer, which in turn clip and antialias the rendered coarse geometry. Results demonstrate that silhouette clipping can produce renderings of similar quality to high-resolution meshes in less rendering time.
Citations: 225
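
Silhouette extraction rests on a simple definition: an edge of a closed mesh is a silhouette edge for a given eye point when exactly one of its two adjacent faces is front-facing. The brute-force sketch below implements just that definition with NumPy; the paper's contribution is avoiding the brute force via a precomputed search tree with anchored-cone culling, which is not shown here.

import numpy as np

def face_normal(vertices, face):
    a, b, c = (np.asarray(vertices[i], dtype=float) for i in face)
    return np.cross(b - a, c - a)              # unnormalized is fine for a sign test

def front_facing(vertices, face, eye):
    n = face_normal(vertices, face)
    a = np.asarray(vertices[face[0]], dtype=float)
    return np.dot(n, np.asarray(eye, dtype=float) - a) > 0.0

def silhouette_edges(vertices, faces, eye):
    # An edge is on the silhouette when exactly one adjacent face is front-facing.
    edge_faces = {}
    for fi, face in enumerate(faces):
        for k in range(3):
            edge = tuple(sorted((face[k], face[(k + 1) % 3])))
            edge_faces.setdefault(edge, []).append(fi)
    silhouette = []
    for edge, adj in edge_faces.items():
        if len(adj) == 2:
            f0, f1 = adj
            if front_facing(vertices, faces[f0], eye) != front_facing(vertices, faces[f1], eye):
                silhouette.append(edge)
    return silhouette

# A tetrahedron viewed from a point along +z: the three edges of the facing triangle are reported.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(silhouette_edges(verts, tris, eye=(0.3, 0.3, 5.0)))
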
As-rigid-as-possible shape interpolation
M. Alexa, D. Cohen-Or, D. Levin
DOI: https://doi.org/10.1145/344779.344859
Published: 2000-07-01
Abstract: We present an object-space morphing technique that blends the interiors of given two- or three-dimensional shapes rather than their boundaries. The morph is rigid in the sense that local volumes are least-distorting as they vary from their source to target configurations. Given a boundary vertex correspondence, the source and target shapes are decomposed into isomorphic simplicial complexes. For the simplicial complexes, we find a closed-form expression allocating the paths of both boundary and interior vertices from source to target locations as a function of time. Key points are the identification of the optimal simplex morphing and the appropriate definition of an error functional whose minimization defines the paths of the vertices. Each pair of corresponding simplices defines an affine transformation, which is factored into a rotation and a stretching transformation. These local transformations are naturally interpolated over time and serve as the basis for composing a global coherent least-distorting transformation.
Citations: 570
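
The per-element building block, factoring each simplex's affine map into a rotation and a symmetric stretch and interpolating the two parts separately, can be shown in isolation. The sketch below does this for a single 2D triangle via an SVD-based polar decomposition; the paper's actual method couples all simplices through a global error functional, which is not reproduced here.

import numpy as np

def triangle_affine(src, dst):
    # Linear part of the affine map taking the source triangle's edge vectors to the target's.
    S = np.column_stack((src[1] - src[0], src[2] - src[0]))
    D = np.column_stack((dst[1] - dst[0], dst[2] - dst[0]))
    return D @ np.linalg.inv(S)

def interpolate_affine(A, t):
    # Polar-decompose A = R * S via the SVD, then interpolate the rotation angle
    # linearly and blend the symmetric stretch toward the identity.
    U, sigma, Vt = np.linalg.svd(A)
    R = U @ Vt                                  # closest rotation (assumes det(A) > 0)
    S = Vt.T @ np.diag(sigma) @ Vt              # symmetric positive-definite stretch
    theta = np.arctan2(R[1, 0], R[0, 0])
    Rt = np.array([[np.cos(t * theta), -np.sin(t * theta)],
                   [np.sin(t * theta),  np.cos(t * theta)]])
    St = (1.0 - t) * np.eye(2) + t * S
    return Rt @ St

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[0.0, 0.0], [0.0, 2.0], [-1.0, 0.0]])   # rotated 90 degrees and stretched
A = triangle_affine(src, dst)
print(interpolate_affine(A, 0.5))               # halfway: 45 degree rotation, partial stretch
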
Seamless texture mapping of subdivision surfaces by model pelting and texture blending
Dan Piponi, G. Borshukov
DOI: https://doi.org/10.1145/344779.344990
Published: 2000-07-01
Abstract: Subdivision surfaces solve numerous problems related to the geometry of character and animation models. However, unlike on parametrised surfaces, there is no natural choice of texture coordinates on subdivision surfaces. Existing algorithms for generating texture coordinates on non-parametrised surfaces often find solutions that are locally acceptable but globally are unsuitable for use by artists wishing to paint textures. In addition, for topological reasons there is not necessarily any choice of assignment of texture coordinates to control points that can satisfactorily be interpolated over the entire surface. We introduce a technique, pelting, for finding both optimal and intuitive texture mapping over almost all of an entire subdivision surface and then show how to combine multiple texture mappings together to produce a seamless result.
Citations: 117
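
The final blending step can be sketched independently of the pelting construction itself: each texture mapping contributes a color weighted by a per-point weight that falls to zero near the boundary of its region, and the weighted colors are normalized. The toy version below uses two hypothetical "textures" and hand-made weight functions purely to show the normalization; none of the names come from the paper.

def blend_textures(point, mappings):
    # mappings: list of (weight_fn, uv_fn, sample_fn) triples. Each mapping covers part of
    # the surface; weights taper to zero near the edge of its region so overlaps blend seamlessly.
    total_w = 0.0
    color = [0.0, 0.0, 0.0]
    for weight_fn, uv_fn, sample_fn in mappings:
        w = weight_fn(point)
        if w <= 0.0:
            continue
        c = sample_fn(*uv_fn(point))
        color = [acc + w * ch for acc, ch in zip(color, c)]
        total_w += w
    return [ch / total_w for ch in color] if total_w > 0 else color

# Two toy "textures": one returns red, the other blue; the weights cross-fade along x.
red = lambda u, v: (1.0, 0.0, 0.0)
blue = lambda u, v: (0.0, 0.0, 1.0)
maps = [
    (lambda p: max(0.0, 1.0 - p[0]), lambda p: (p[0], p[1]), red),
    (lambda p: max(0.0, p[0]),       lambda p: (p[0], p[1]), blue),
]
print(blend_textures((0.25, 0.5), maps))   # mostly red near x = 0.25
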