ACM SIGGRAPH 2010 Papers: Latest Publications

Data-driven biped control
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1781155
Yoonsang Lee, Sungeun Kim, Jehee Lee
Abstract: We present a dynamic controller to physically simulate under-actuated three-dimensional full-body biped locomotion. Our data-driven controller takes motion capture reference data and reproduces realistic human locomotion through real-time physically based simulation. The key idea is to modulate the reference trajectory continuously and seamlessly so that even a simple dynamic tracking controller can follow it while maintaining balance. In our framework, biped control can draw on a large array of existing data-driven animation techniques, because the controller accepts a stream of reference data generated on the fly at runtime. We demonstrate the effectiveness of our approach through examples in which bipeds turn, spin, and walk while interactively steering their direction.
Citations: 194
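The control structure the abstract describes is small enough to sketch. Below is a minimal, hypothetical Python illustration: a plain PD tracking servo follows a reference pose that is continuously nudged by a balance feedback term. The gains, the unit-inertia joint model, and `modulate_reference` are placeholder assumptions for illustration, not the paper's actual controller or modulation rule.

```python
import numpy as np

def pd_torque(q, dq, q_ref, dq_ref, kp=300.0, kd=30.0):
    """Joint-space PD servo: drive the simulated pose toward the reference."""
    return kp * (q_ref - q) + kd * (dq_ref - dq)

def modulate_reference(q_ref, com_err, gain=0.05):
    """Hypothetical balance feedback: shift the reference pose in proportion
    to the center-of-mass error so a plain tracking controller can follow it
    without falling. The paper's modulation rule is more involved."""
    return q_ref + gain * com_err

def simulate(frames, q0, dq0, dt=1.0 / 600.0):
    """Track a stream of (q_ref, dq_ref, com_err) frames, standing in for
    reference data generated on the fly; a unit-inertia joint model replaces
    full multibody dynamics."""
    q, dq = q0, dq0
    for q_ref, dq_ref, com_err in frames:
        q_mod = modulate_reference(q_ref, com_err)
        tau = pd_torque(q, dq, q_mod, dq_ref)
        dq = dq + tau * dt   # unit inertia: acceleration equals torque
        q = q + dq * dt
    return q, dq
```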
Diffusion coded photography for extended depth of field
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778768
O. Cossairt, Changyin Zhou, S. Nayar
Abstract: In recent years, several cameras have been introduced that extend depth of field (DOF) by producing a depth-invariant point spread function (PSF). These cameras extend DOF by deblurring a captured image with a single spatially invariant PSF. For such cameras, the quality of recovered images depends both on the magnitude of the camera's PSF spectrum (the MTF) and on the similarity between PSFs at different depths. While researchers have compared the MTFs of different extended-DOF cameras, relatively little attention has been paid to evaluating their depth invariance. In this paper, we compare the depth invariance of several cameras and introduce a new camera that improves on existing designs in this regard while still maintaining a good MTF. Our technique uses a novel optical element placed in the pupil plane of an imaging system. Whereas previous approaches use optical elements characterized by their amplitude or phase profile, ours uses one characterized by its scattering properties. Such an element is commonly referred to as an optical diffuser, and we therefore call our approach diffusion coding. We show that diffusion coding can be analyzed in a simple and intuitive way by modeling the effect of the diffuser as a kernel in light-field space. We provide a detailed analysis of diffusion coded cameras and show results from an implementation using a custom-designed diffuser.
Citations: 76
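Once the PSF is approximately the same at every depth, recovery reduces to a single spatially invariant deconvolution. A minimal sketch using a Wiener filter follows; the `snr` constant is an assumed regularizer and the PSF is an assumed input (in the paper it follows from the diffuser design). The image is assumed to be a float grayscale array.

```python
import numpy as np

def wiener_deblur(image, psf, snr=1e-2):
    """Deblur with one spatially invariant PSF via a Wiener filter."""
    # Zero-pad the (smaller) PSF to the image size, centered at the origin.
    pad = np.zeros_like(image)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)            # PSF spectrum (the MTF is |H|)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + snr)   # Wiener inverse filter
    return np.real(np.fft.ifft2(F * G))
```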
Session details: Fluids II
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/3251984
Citations: 0
Ambient point clouds for view interpolation
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778832
M. Goesele, J. Ackermann, Simon Fuhrmann, Carsten Haubold, Ronny Klowsky, Drew Steedly, R. Szeliski
Abstract: View interpolation and image-based rendering algorithms often produce visual artifacts in regions where the 3D scene geometry is erroneous, uncertain, or incomplete. We introduce ambient point clouds, constructed from colored pixels with uncertain depth, which help reduce these artifacts while providing non-photorealistic background coloring and emphasizing reconstructed 3D geometry. Ambient point clouds are created by randomly sampling colored points along the viewing rays associated with uncertain pixels. Our real-time rendering system combines these with more traditional rigid 3D point clouds and colored surface meshes obtained using multiview stereo. The resulting system can handle larger-range view transitions with fewer visible artifacts than previous approaches.
Citations: 87
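The construction for a single uncertain pixel is straightforward to sketch. The uniform-in-depth sampling and the function names below are illustrative assumptions consistent with the abstract, not the paper's exact procedure.

```python
import numpy as np

def ambient_points(origin, direction, color, depth_range, n=32, rng=None):
    """Scatter n colored points at random depths along one viewing ray.

    A pixel whose stereo depth is uncertain gets a fuzzy 'ambient' cloud
    of samples instead of a single (likely wrong) 3D point; every sample
    keeps the pixel's color."""
    rng = rng or np.random.default_rng()
    near, far = depth_range
    t = rng.uniform(near, far, size=n)               # random depths on the ray
    pts = origin[None, :] + t[:, None] * direction[None, :]
    cols = np.repeat(color[None, :], n, axis=0)
    return pts, cols
```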
By-example synthesis of architectural textures
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778821
S. Lefebvre, S. Hornus, A. Lasram
Abstract: Textures are often reused on different surfaces in large virtual environments. This leads to unpleasant stretching and cropping of features when textures contain architectural elements. Existing retargeting methods could adapt each texture to the size of its support surface, but this would imply storing a different image for each and every surface, saturating memory. Our new texture synthesis approach casts synthesis as a shortest-path problem in a graph describing the space of images that can be synthesized. Each path in the graph describes how to form a new image by cutting strips of the source image and reassembling them in a different order. Only the paths describing the result need to be stored in memory: synthesized textures are reconstructed at rendering time. The user can control the repetition of features and may specify positional constraints. We demonstrate our approach on a variety of textures, from facades for large-scale city rendering to structured textures commonly used in video games.
Citations: 41
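The shortest-path framing can be illustrated with a simplified model: treat each candidate source strip as an edge with a precomputed seam cost, and search for the cheapest sequence of strips that exactly fills the target width. This is a hypothetical reduction of the paper's graph (whose nodes are cuts in the source image); the `strips` input and its costs are assumed.

```python
import heapq

def synthesize_width(strips, target_w):
    """Dijkstra over accumulated output width: pick a sequence of source
    strips (width, seam_cost) summing to target_w with minimal total seam
    cost. Only the recovered path -- not any image -- needs storing."""
    dist = {0: 0.0}
    prev = {}
    heap = [(0.0, 0)]
    while heap:
        d, w = heapq.heappop(heap)
        if w == target_w:
            break
        if d > dist.get(w, float("inf")):
            continue                       # stale queue entry
        for i, (sw, cost) in enumerate(strips):
            nw = w + sw
            if nw <= target_w and d + cost < dist.get(nw, float("inf")):
                dist[nw] = d + cost
                prev[nw] = (w, i)
                heapq.heappush(heap, (dist[nw], nw))
    path, w = [], target_w                 # walk back to list strip indices
    while w in prev:
        w, i = prev[w]
        path.append(i)
    return list(reversed(path)), dist.get(target_w)  # None if infeasible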
envyLight: an interface for editing natural illumination
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778771
F. Pellacini
Abstract: Scenes lit with high-dynamic-range environment maps of real-world environments exhibit all the complex nuances of natural illumination. For applications that need lighting adjustments to the rendered images, editing environment maps directly is still cumbersome. First, designers have to determine which region of the environment map is responsible for the specific lighting feature (e.g., diffuse gradients, highlights, and shadows) they wish to edit. Second, determining the parameters of the image-editing operations needed to achieve a specific change to the selected lighting feature requires extensive trial and error. This paper presents envyLight, an interactive interface for editing natural illumination that combines an algorithm for selecting environment map regions, by sketching strokes on lighting features in the rendered image, with a small set of editing operations to quickly adjust the selected feature. The envyLight selection algorithm works well for indoor and outdoor lighting, corresponding to rendered images whose lighting features vary widely in number, size, contrast, and edge blur. Furthermore, envyLight selection is general with respect to material type, from matte to sharp glossy, and to the complexity of the scene's shape. envyLight editing operations allow designers to quickly alter the position, contrast, and edge blur of the selected lighting feature, and can be keyframed to support animation.
Citations: 50
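The stroke-based selection algorithm is the hard part; given its output as a per-texel mask, the editing operations amount to simple radiance remappings. A hedged sketch of two such operations follows, assuming a float HDR map of shape (H, W, 3) and a boolean mask; these are illustrative stand-ins, not the paper's exact operator set.

```python
import numpy as np

def scale_region(env_map, mask, gain):
    """Brighten or dim the selected region, strengthening or weakening the
    highlight/shadow that region casts in the rendering."""
    out = env_map.copy()
    out[mask] *= gain
    return out

def adjust_contrast(env_map, mask, contrast):
    """Remap the selected region around its mean radiance: contrast > 1
    sharpens the lighting feature, contrast < 1 softens it."""
    out = env_map.copy()
    region = out[mask]
    out[mask] = region.mean() + contrast * (region - region.mean())
    return out
```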
SmartBoxes for interactive urban reconstruction
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778830
L. Nan, Andrei Sharf, Hao Zhang, D. Cohen-Or, Baoquan Chen
Abstract: We introduce an interactive tool that enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on the fly to fit the point data well, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization that balances two snapping forces, defined by a data-fitting term and a contextual term respectively, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that the combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures that are partially or even completely missing from the input.
Citations: 185
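The two-term objective can be sketched as a scoring function for one candidate box placement; a discrete optimizer would then evaluate candidates over a grid of positions and sizes and keep the best. The specific terms and weights below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def box_score(points, box_min, box_max, neighbor_boxes,
              w_data=1.0, w_context=0.5):
    """Score a candidate axis-aligned box against scan points (N, 3).

    Data term: fraction of points the box encloses (fit to the scan).
    Contextual term: penalty for deviating in size from nearby similar
    boxes (respecting relations with neighbors)."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    data_term = inside.mean() if len(points) else 0.0

    size = box_max - box_min
    if neighbor_boxes:
        sizes = np.array([bmax - bmin for bmin, bmax in neighbor_boxes])
        context_term = -np.mean(np.linalg.norm(sizes - size, axis=1))
    else:
        context_term = 0.0
    return w_data * data_term + w_context * context_term
```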
Learning 3D mesh segmentation and labeling
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778839
E. Kalogerakis, Aaron Hertzmann, Karan Singh
Abstract: This paper presents a data-driven approach to the simultaneous segmentation and labeling of parts in 3D meshes. An objective function is formulated as a Conditional Random Field model, with terms assessing the consistency of faces with labels and terms between the labels of neighboring faces. The objective function is learned from a collection of labeled training meshes. The algorithm uses hundreds of geometric and contextual label features and learns different types of segmentations for different tasks, without requiring manual parameter tuning. Our algorithm achieves a significant improvement over the state of the art when evaluated on the Princeton Segmentation Benchmark, often producing segmentations and labelings comparable to those produced by humans.
Citations: 560
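The CRF objective has the classic unary-plus-pairwise shape, which is easy to write down. In the sketch below both terms are given as inputs, whereas in the paper they are learned from labeled training meshes; the Potts-style pairwise penalty is a simplifying assumption.

```python
def crf_energy(labels, unary, adjacency, pairwise_weight=1.0):
    """Energy of one labeling of mesh faces.

    labels:    list, labels[f] is the label of face f
    unary:     2D array, unary[f, l] = cost of face f taking label l
               (derived from per-face geometric/contextual features)
    adjacency: iterable of (f, g) pairs of neighboring faces
    A Potts pairwise term charges a constant whenever neighbors disagree,
    encouraging coherent segments with boundaries at label changes."""
    e = sum(unary[f, l] for f, l in enumerate(labels))
    for f, g in adjacency:
        if labels[f] != labels[g]:
            e += pairwise_weight
    return e
```

Minimizing this energy over all labelings yields the segmentation; any standard CRF inference method (e.g., graph cuts or iterated conditional modes) could serve in the sketch's place.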
Sampling-based contact-rich motion control
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778865
Libin Liu, KangKang Yin, M. V. D. Panne, Tianjia Shao, Weiwei Xu
Abstract: Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because they do not depend on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate the reconstruction of a diverse set of captured motions, including walking, running, and contact-rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.
Citations: 168
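The sample-and-simulate loop can be sketched in a few lines. This greedy, one-best-sample-per-frame version is a deliberate simplification (the paper's sampling scheme maintains multiple candidates); `sim_step` stands in for a real physics engine, and the state is assumed directly comparable to a mocap frame.

```python
import numpy as np

def reconstruct_control(sim_step, mocap, state, n_samples=200,
                        bound=0.2, rng=None):
    """Derivative-free control reconstruction.

    For each capture frame, draw random control offsets within user bounds,
    forward-simulate each candidate (embarrassingly parallel in practice),
    and keep the one whose resulting state lands closest to the capture."""
    rng = rng or np.random.default_rng()
    controls = []
    for target in mocap:
        candidates = rng.uniform(-bound, bound,
                                 size=(n_samples, target.shape[0]))
        results = [sim_step(state, c) for c in candidates]
        errs = [np.linalg.norm(s - target) for s in results]
        best = int(np.argmin(errs))
        state = results[best]              # commit the best rollout
        controls.append(candidates[best])  # open-loop control, frame by frame
    return controls
```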
Structure-based ASCII art
ACM SIGGRAPH 2010 papers. Pub date: 2010-07-26. DOI: 10.1145/1833349.1778789
Xuemiao Xu, Linling Zhang, T. Wong
Abstract: The wide availability and popularity of text-based communication channels encourage the use of ASCII art to represent images. Existing tone-based ASCII art generation methods lead to halftone-like results and require high text resolution for display, as higher text resolution offers more tonal variety. This paper presents a novel method to generate structure-based ASCII art, which is currently mostly created by hand. It approximates the major line structure of the reference image content with the shapes of characters. Representing unlimited image content with the extremely limited shapes and restrictive placement of characters makes this problem challenging. Most existing shape similarity metrics either fail to address the misalignment that arises in real-world scenarios or are unable to account for differences in position, orientation, and scaling. Our key contribution is a novel alignment-insensitive shape similarity (AISS) metric that tolerates misalignment of shapes while accounting for differences in position, orientation, and scaling. Together with a constrained deformation approach, we formulate ASCII art generation as an optimization that minimizes shape dissimilarity and deformation. Convincing results and a user study demonstrate the method's effectiveness.
Citations: 58
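A crude stand-in for misalignment-tolerant shape matching is to blur both shapes before correlating them, so a line that is shifted by a pixel or two still overlaps its counterpart. The sketch below illustrates only that tolerance idea; the paper's AISS metric additionally accounts for position, orientation, and scale differences explicitly, which this does not.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_similarity(a, b, sigma=2.0):
    """Misalignment-tolerant comparison of two shape images in [0, 1]:
    Gaussian-blur both, then take normalized cross-correlation."""
    fa, fb = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    fa, fb = fa - fa.mean(), fb - fb.mean()
    denom = np.linalg.norm(fa) * np.linalg.norm(fb)
    return float((fa * fb).sum() / denom) if denom > 0 else 0.0

def best_char(patch, glyphs):
    """Pick the character whose rendered glyph best matches an image patch.

    glyphs: dict mapping a character to its glyph image (same shape as
    patch); both inputs are assumed precomputed by the caller."""
    return max(glyphs, key=lambda c: blurred_similarity(patch, glyphs[c]))
```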