ACM SIGGRAPH 2010 papers: Latest Articles

A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778836
M. Holroyd, Jason Lawrence, Todd E. Zickler
{"title":"A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance","authors":"M. Holroyd, Jason Lawrence, Todd E. Zickler","doi":"10.1145/1833349.1778836","DOIUrl":"https://doi.org/10.1145/1833349.1778836","url":null,"abstract":"We present a novel optical setup and processing pipeline for measuring the 3D geometry and spatially-varying surface reflectance of physical objects. Central to our design is a digital camera and a high frequency spatially-modulated light source aligned to share a common focal point and optical axis. Pairs of such devices allow capturing a sequence of images from which precise measurements of geometry and reflectance can be recovered. Our approach is enabled by two technical contributions: a new active multiview stereo algorithm and an analysis of light descattering that has important implications for image-based reflectometry. We show that the geometry measured by our scanner is accurate to within 50 microns at a resolution of roughly 200 microns and that the reflectance agrees with reference data to within 5.5%. Additionally, we present an image relighting application and show renderings that agree very well with reference images at light and view positions far from those that were initially measured.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115603068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 134
Camouflage images
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778788
Hung-Kuo Chu, Wei-Hsin Hsu, Niloy J. Mitra, Daniel Cohen-Or, T. Wong, Tong-Yee Lee
{"title":"Camouflage images","authors":"Hung-Kuo Chu, Wei-Hsin Hsu, Niloy J. Mitra, Daniel Cohen-Or, T. Wong, Tong-Yee Lee","doi":"10.1145/1833349.1778788","DOIUrl":"https://doi.org/10.1145/1833349.1778788","url":null,"abstract":"Camouflage images contain one or more hidden figures that remain imperceptible or unnoticed for a while. In one possible explanation, the ability to delay the perception of the hidden figures is attributed to the theory that human perception works in two main phases: feature search and conjunction search. Effective camouflage images make feature based recognition difficult, and thus force the recognition process to employ conjunction search, which takes considerable effort and time. In this paper, we present a technique for creating camouflage images. To foil the feature search, we remove the original subtle texture details of the hidden figures and replace them by that of the surrounding apparent image. To leave an appropriate degree of clues for the conjunction search, we compute and assign new tones to regions in the embedded figures by performing an optimization between two conflicting terms, which we call immersion and standout, corresponding to hiding and leaving clues, respectively. We show a large number of camouflage images generated by our technique, with or without user guidance. 
We have tested the quality of the images in an extensive user study, showing a good control of the difficulty levels.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115917163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
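The immersion/standout trade-off can be illustrated with a toy single-region version: pull the assigned tone toward the background (immersion) and toward the figure's original tone (standout), with weights controlling difficulty. The quadratic energy and its closed-form minimizer below are a hypothetical simplification, not the paper's actual objective:

```python
def assign_tone(background_tone, figure_tone, w_immersion, w_standout):
    """Minimize w_i*(t - background)^2 + w_s*(t - figure)^2 over t.
    The minimizer is the weighted average of the two competing tones."""
    return (w_immersion * background_tone + w_standout * figure_tone) / (
        w_immersion + w_standout)

# Equal weights split the difference; a larger immersion weight
# pulls the hidden figure's tone toward the surrounding image.
balanced = assign_tone(0.8, 0.2, 1.0, 1.0)
hidden = assign_tone(0.8, 0.2, 3.0, 1.0)
```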
Citations: 87
Optimal feedback control for character animation using an abstract model
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778811
Yuting Ye, C. Liu
{"title":"Optimal feedback control for character animation using an abstract model","authors":"Yuting Ye, C. Liu","doi":"10.1145/1833349.1778811","DOIUrl":"https://doi.org/10.1145/1833349.1778811","url":null,"abstract":"Real-time adaptation of a motion capture sequence to virtual environments with physical perturbations requires robust control strategies. This paper describes an optimal feedback controller for motion tracking that allows for on-the-fly re-planning of long-term goals and adjustments in the final completion time. We first solve an offline optimal trajectory problem for an abstract dynamic model that captures the essential relation between contact forces and momenta. A feedback control policy is then derived and used to simulate the abstract model online. Simulation results become dynamic constraints for online reconstruction of full-body motion from a reference. We applied our controller to a wide range of motions including walking, long stepping, and a squat exercise. Results show that our controllers are robust to large perturbations and changes in the environment.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123912725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 124
The Frankencamera: an experimental platform for computational photography
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778766
Andrew Adams, Eino-Ville Talvala, S. H. Park, David E. Jacobs, B. Ajdin, Natasha Gelfand, Jennifer Dolson, D. Vaquero, Jongmin Baek, M. Tico, H. Lensch, W. Matusik, K. Pulli, M. Horowitz, M. Levoy
{"title":"The Frankencamera: an experimental platform for computational photography","authors":"Andrew Adams, Eino-Ville Talvala, S. H. Park, David E. Jacobs, B. Ajdin, Natasha Gelfand, Jennifer Dolson, D. Vaquero, Jongmin Baek, M. Tico, H. Lensch, W. Matusik, K. Pulli, M. Horowitz, M. Levoy","doi":"10.1145/1833349.1778766","DOIUrl":"https://doi.org/10.1145/1833349.1778766","url":null,"abstract":"Although there has been much interest in computational photography within the research and photography communities, progress has been hampered by the lack of a portable, programmable camera with sufficient image quality and computing power. To address this problem, we have designed and implemented an open architecture and API for such cameras: the Frankencamera. It consists of a base hardware specification, a software stack based on Linux, and an API for C++. Our architecture permits control and synchronization of the sensor and image processing pipeline at the microsecond time scale, as well as the ability to incorporate and synchronize external hardware like lenses and flashes. This paper specifies our architecture and API, and it describes two reference implementations we have built. Using these implementations we demonstrate six computational photography applications: HDR viewfinding and capture, low-light viewfinding and capture, automated acquisition of extended dynamic range panoramas, foveal imaging, IMU-based hand shake detection, and rephotography. 
Our goal is to standardize the architecture and distribute Frankencameras to researchers and students, as a step towards creating a community of photographer-programmers who develop algorithms, applications, and hardware for computational cameras.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117206933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
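One of the demonstrated applications, HDR capture, rests on merging bracketed exposures into a radiance map. A generic weighted merge for linear-response pixels (this is standard HDR practice, not the Frankencamera API itself):

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed linear exposures into radiance: each pixel is a
    weighted average of (value / exposure_time), with a hat weight that
    trusts mid-range values and discounts near-black/near-saturated ones."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight on [0, 1] values
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# A scene point of radiance 2.0 measures 0.5 at t = 0.25 s and 0.2 at
# t = 0.1 s; both exposures vote for the same radiance.
rad = merge_hdr([np.array([[0.5]]), np.array([[0.2]])], [0.25, 0.1])
```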
Citations: 3
Reliefs as images
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778797
M. Alexa, W. Matusik
{"title":"Reliefs as images","authors":"M. Alexa, W. Matusik","doi":"10.1145/1833349.1778797","DOIUrl":"https://doi.org/10.1145/1833349.1778797","url":null,"abstract":"We describe how to create relief surfaces whose diffuse reflection approximates given images under known directional illumination. This allows using any surface with a significant diffuse reflection component as an image display. We propose a discrete model for the area in the relief surface that corresponds to a pixel in the desired image. This model introduces the necessary degrees of freedom to overcome theoretical limitations in shape from shading and practical requirements such as stability of the image under changes in viewing condition and limited overall variation in depth. The discrete surface is determined using an iterative least squares optimization. We show several resulting relief surfaces conveying one image for varying lighting directions as well as two images for two specific lighting directions.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123430585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 109
Dynamic video narratives
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778825
Carlos D. Correa, K. Ma
{"title":"Dynamic video narratives","authors":"Carlos D. Correa, K. Ma","doi":"10.1145/1833349.1778825","DOIUrl":"https://doi.org/10.1145/1833349.1778825","url":null,"abstract":"This paper presents a system for generating dynamic narratives from videos. These narratives are characterized for being compact, coherent and interactive, as inspired by principles of sequential art. Narratives depict the motion of one or several actors over time. Creating compact narratives is challenging as it is desired to combine the video frames in a way that reuses redundant backgrounds and depicts the stages of a motion. In addition, previous approaches focus on the generation of static summaries and can afford expensive image composition techniques. A dynamic narrative, on the other hand, must be played and skimmed in real-time, which imposes certain cost limitations in the video processing. In this paper, we define a novel process to compose foreground and background regions of video frames in a single interactive image using a series of spatio-temporal masks. These masks are created to improve the output of automatic video processing techniques such as image stitching and foreground segmentation. Unlike hand-drawn narratives, often limited to static representations, the proposed system allows users to explore the narrative dynamically and produce different representations of motion. We have built an authoring system that incorporates these methods and demonstrated successful results on a number of video clips. 
The authoring system can be used to create interactive posters of video clips, browse video in a compact manner or highlight a motion sequence in a movie.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129576708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
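The core compositing step can be sketched as masked layering: foreground regions from several frames are stamped over a shared background using binary spatio-temporal masks (a toy version; the paper's masks additionally repair stitching and segmentation artifacts):

```python
import numpy as np

def composite(background, frames, masks):
    """Stamp the masked foreground of each frame over a shared
    background, producing one narrative image from many frames."""
    out = background.astype(float).copy()
    for frame, mask in zip(frames, masks):
        out = np.where(mask, frame, out)
    return out

# Two frames of a moving actor: visible in column 0 at time 1 and
# column 3 at time 2, composited over one reused background.
bg = np.zeros((2, 4))
f1 = np.full((2, 4), 1.0); m1 = np.zeros((2, 4), bool); m1[:, 0] = True
f2 = np.full((2, 4), 2.0); m2 = np.zeros((2, 4), bool); m2[:, 3] = True
img = composite(bg, [f1, f2], [m1, m2])
```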
Citations: 87
Gesture controllers
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778861
S. Levine, Philipp Krähenbühl, S. Thrun, V. Koltun
{"title":"Gesture controllers","authors":"S. Levine, Philipp Krähenbühl, S. Thrun, V. Koltun","doi":"10.1145/1833349.1778861","DOIUrl":"https://doi.org/10.1145/1833349.1778861","url":null,"abstract":"We introduce gesture controllers, a method for animating the body language of avatars engaged in live spoken conversation. A gesture controller is an optimal-policy controller that schedules gesture animations in real time based on acoustic features in the user's speech. The controller consists of an inference layer, which infers a distribution over a set of hidden states from the speech signal, and a control layer, which selects the optimal motion based on the inferred state distribution. The inference layer, consisting of a specialized conditional random field, learns the hidden structure in body language style and associates it with acoustic features in speech. The control layer uses reinforcement learning to construct an optimal policy for selecting motion clips from a distribution over the learned hidden states. The modularity of the proposed method allows customization of a character's gesture repertoire, animation of non-human characters, and the use of additional inputs such as speech recognition or direct user control.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 152
Session details: Perception, presence & animation
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/3252004
{"title":"Session details: Perception, presence & animation","authors":"","doi":"10.1145/3252004","DOIUrl":"https://doi.org/10.1145/3252004","url":null,"abstract":"","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129043079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Physical reproduction of materials with specified subsurface scattering
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778798
Miloš Hašan, M. Fuchs, W. Matusik, H. Pfister, S. Rusinkiewicz
{"title":"Physical reproduction of materials with specified subsurface scattering","authors":"Miloš Hašan, M. Fuchs, W. Matusik, H. Pfister, S. Rusinkiewicz","doi":"10.1145/1833349.1778798","DOIUrl":"https://doi.org/10.1145/1833349.1778798","url":null,"abstract":"We investigate a complete pipeline for measuring, modeling, and fabricating objects with specified subsurface scattering behaviors. The process starts with measuring the scattering properties of a given set of base materials, determining their radial reflection and transmission profiles. We describe a mathematical model that predicts the profiles of different stackings of base materials, at arbitrary thicknesses. In an inverse process, we can then specify a desired reflection profile and compute a layered composite material that best approximates it. Our algorithm efficiently searches the space of possible combinations of base materials, pruning unsatisfactory states imposed by physical constraints. We validate our process by producing both homogeneous and heterogeneous composites fabricated using a multi-material 3D printer. We demonstrate reproductions that have scattering properties approximating complex materials.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122365167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 142
Apparent display resolution enhancement for moving images
ACM SIGGRAPH 2010 papers Pub Date : 2010-07-26 DOI: 10.1145/1833349.1778850
P. Didyk, E. Eisemann, Tobias Ritschel, K. Myszkowski, H. Seidel
{"title":"Apparent display resolution enhancement for moving images","authors":"P. Didyk, E. Eisemann, Tobias Ritschel, K. Myszkowski, H. Seidel","doi":"10.1145/1833349.1778850","DOIUrl":"https://doi.org/10.1145/1833349.1778850","url":null,"abstract":"Limited spatial resolution of current displays makes the depiction of very fine spatial details difficult. This work proposes a novel method applied to moving images that takes into account the human visual system and leads to an improved perception of such details. To this end, we display images rapidly varying over time along a given trajectory on a high refresh rate display. Due to the retinal integration time the information is fused and yields apparent super-resolution pixels on a conventional-resolution display. We discuss how to find optimal temporal pixel variations based on linear eye-movement and image content and extend our solution to arbitrary trajectories. This step involves an efficient method to predict and successfully treat potentially visible flickering. Finally, we evaluate the resolution enhancement in a perceptual study that shows that significant improvements can be achieved both for computer generated images and photographs.","PeriodicalId":132490,"journal":{"name":"ACM SIGGRAPH 2010 papers","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130496289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53