Antialiasing for automultiscopic 3D displays
Matthias Zwicker, W. Matusik, F. Durand, H. Pfister
Eurographics Symposium on Rendering, 2006. doi: 10.2312/EGWR/EGSR06/073-082

Abstract: Automultiscopic displays show stereoscopic images that can be viewed from any viewpoint without special glasses. They hold great promise for the future of television and digital entertainment. However, the image quality on these 3D displays is currently not sufficient to appeal to the mass market. In this paper, we extend the frequency analysis of light fields to address some of the major issues in 3D cinematography for automultiscopic displays. First, we derive the bandwidth of 3D displays using ray-space analysis, and we introduce a method to quantify display depth of field. We show that this approach provides solid foundations to analyze and distinguish various aspects of aliasing. We then present an anti-aliasing technique for automultiscopic displays by combining a reconstruction and a display prefilter. Next, we show how to reparameterize multi-view inputs to optimally match the depth of field of a display to improve the image quality. Finally, we present guidelines for 3D content acquisition, such as optimal multi-view camera configuration and placement.
Surface enhancement using real-time photometric stereo and reflectance transformation
T. Malzbender, Bennett Wilburn, Dan Gelb, Bill Ambrisco
Eurographics Symposium on Rendering, 2006. doi: 10.2312/EGWR/EGSR06/245-250

Abstract: Photometric stereo recovers per-pixel estimates of surface orientation from images of a surface under varying lighting conditions. Transforming reflectance based on recovered normal directions is useful for enhancing the appearance of subtle surface detail. We present the first system that achieves real-time photometric stereo and reflectance transformation. A high-speed video camera, computer controlled light sources and fast GPU implementations of the algorithms enable both methods. We also present novel GPU-accelerated normal transformations before relighting that "amplify" shape detail. By applying standard image processing methods to our computed normal image, we can selectively enhance surface detail at different frequencies. Our system allows users in fields such as forensics, archeology and dermatology to investigate objects and surfaces by simply holding them in front of the camera. Real-time analysis of surface roughness for metrology can also be performed from the extracted normal field.
{"title":"Segmentation-based 3D artistic rendering","authors":"A. Kolliopoulos, Jack M. Wang, Aaron Hertzmann","doi":"10.2312/EGWR/EGSR06/361-370","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/361-370","url":null,"abstract":"This paper introduces segmentation-based 3D non-photorealistic rendering, in which 3D scenes are rendered as a collection of 2D image segments. Segments abstract out unnecessary detail and provide a basis for defining new rendering styles. These segments are computed by a spectral clustering algorithm that incorporates 3D information, including depth, user-defined importance, and object grouping. Temporally coherent animation is created by biasing adjacent frames to have similar segmentations. We describe algorithms for rendering segments in styles inspired by a number of hand-painted images.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128214147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relighting human locomotion with flowed reflectance fields
P. Einarsson, Charles-Félix Chabert, Andrew Jones, Wan-Chun Ma, B. Lamond, Tim Hawkins, M. Bolas, Sebastian Sylwan, P. Debevec
Eurographics Symposium on Rendering, 2006. doi: 10.2312/EGWR/EGSR06/183-194

Abstract: We present an image-based approach for capturing the appearance of a walking or running person so they can be rendered realistically under variable viewpoint and illumination. In our approach, a person walks on a treadmill at a regular rate as a turntable slowly rotates the person's direction. As this happens, the person is filmed with a vertical array of high-speed cameras under a time-multiplexed lighting basis, acquiring a seven-dimensional dataset of the person under variable time, illumination, and viewing direction in approximately forty seconds. We process this data into a flowed reflectance field using an optical flow algorithm to correspond pixels in neighboring camera views and time samples to each other, and we use image compression to reduce the size of this data. We then use image-based relighting and a hardware-accelerated combination of view morphing and light field rendering to render the subject under user-specified viewpoint and lighting conditions. To composite the person into a scene, we use an alpha channel derived from back lighting and a retroreflective treadmill surface, and a visual hull process to render the shadows the person would cast onto the ground. We demonstrate realistic composites of several subjects into real and virtual environments using our technique.
Spatio-angular resolution tradeoffs in integral photography
Todor Georgiev, K. Zheng, B. Curless, D. Salesin, S. Nayar, C. Intwala
Eurographics Symposium on Rendering, 2006. doi: 10.2312/EGWR/EGSR06/263-272

Abstract: An integral camera samples the 4D light field of a scene within a single photograph. This paper explores the fundamental tradeoff between spatial resolution and angular resolution that is inherent to integral photography. Based on our analysis we divide previous integral camera designs into two classes depending on how the 4D light field is distributed (multiplexed) over the 2D sensor. Our optical treatment is mathematically rigorous and extensible to the broader area of light field research.

We argue that for many real-world scenes it is beneficial to sacrifice angular resolution for higher spatial resolution. The missing angular resolution is then interpolated using techniques from computer vision. We have developed a prototype integral camera that uses a system of lenses and prisms as an external attachment to a conventional camera. We have used this prototype to capture the light fields of a variety of scenes. We show examples of novel view synthesis and refocusing where the spatial resolution is significantly higher than is possible with previous designs.
Painting with texture
Lincoln Ritter, Wilmot Li, B. Curless, Maneesh Agrawala, D. Salesin
Eurographics Symposium on Rendering, 2006. doi: 10.2312/EGWR/EGSR06/371-376

Abstract: We present an interactive texture painting system that allows the user to author digital images by painting with a palette of input textures. At the core of our system is an interactive texture synthesis algorithm that generates textures with natural-looking boundary effects and alpha information as the user paints. Furthermore, we describe an intuitive layered painting model that allows strokes of texture to be merged, intersected and overlapped while maintaining the appropriate boundaries between texture regions. We demonstrate the utility and expressiveness of our system by painting several images using textures that exhibit a range of different boundary effects.
{"title":"Interactive screen-space accurate photon tracing on GPUs","authors":"J. Krüger, K. Bürger, R. Westermann","doi":"10.2312/EGWR/EGSR06/319-329","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/319-329","url":null,"abstract":"Recent advances in algorithms and graphics hardware have opened the possibility to render caustics at interactive rates on commodity PCs. This paper extends on this work in that it presents a new method to directly render caustics on complex objects, to compute one or several refractions at such objects and to simulate caustics shadowing. At the core of our method is the idea to avoid the construction of photon maps by tracing photons in screen-space on programmable graphics hardware. Our algorithm is based on the rasterization of photon paths into texture maps. Intersection events are then resolved on a per-fragment basis using layered depth images. To correctly spread photon energy in screen-space we render aligned point sprites at the diffuse receivers where photons terminate. As our method does neither require any pre-processing nor an intermediate radiance representation it can efficiently deal with dynamic scenery and scenery that is modified, or even created on the GPU.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121313281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tomographic reconstruction of transparent objects","authors":"B. Trifonov, D. Bradley, W. Heidrich","doi":"10.2312/EGWR/EGSR06/051-060","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/051-060","url":null,"abstract":"The scanning of 3D geometry has become a popular way of capturing the shape of real-world objects. Transparent objects, however, pose problems for traditional scanning methods. We present a visible light tomographic reconstruction method for recovering the shape of transparent objects, such as glass. Our setup is relatively simple to implement, and accounts for refraction, which can be a significant problem in visible light tomography.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133909785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-driven navigation of analytical BRDF models","authors":"A. Ngan, F. Durand, W. Matusik","doi":"10.2312/EGWR/EGSR06/399-407","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/399-407","url":null,"abstract":"Specifying parameters of analytic BRDF models is a difficult task as these parameters are often not intuitive for artists and their effect on appearance can be non-uniform. Ideally, a given step in the parameter space should produce a predictable and perceptually-uniform change in the rendered image. Systems that employ psychophysics have produced important advances in this direction; however, the requirement of user studies limits scalability of these approaches. In this work, we propose a new and intuitive method for designing material appearance. First, we define a computational metric between BRDFs that is based on rendered images of a scene under natural illumination. We show that our metric produces results that agree with previous perceptual studies. Next, we propose a user interface that allows for navigation in the remapped parameter space of a given BRDF model. For the current settings of the BRDF parameters, we display a choice of variations corresponding to uniform steps according to our metric, in the various parameter directions. In addition to the parametric navigation for a single model, we also support neighborhood navigation in the space of all models. By clustering a large number of neighbors and removing neighbors that are close to the current model, the user can easily visualize the alternate effects that can only be expressed with other models. We show that our interface is simple and intuitive. Furthermore, visual navigation in the BRDF space both in the local model and the union space is an effective way for reflectance design.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114643046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instant ray tracing: the bounding interval hierarchy","authors":"Carsten Wächter, A. Keller","doi":"10.2312/EGWR/EGSR06/139-149","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/139-149","url":null,"abstract":"We introduce a new ray tracing algorithm that exploits the best of previous methods: Similar to bounding volume hierarchies the memory of the acceleration data structure is linear in the number of objects to be ray traced and can be predicted prior to construction, while the traversal of the hierarchy is as efficient as the one of kd-trees. The construction algorithm can be considered a variant of quicksort and for the first time is based on a global space partitioning heuristic, which is much cheaper to evaluate than the classic surface area heuristic. Compared to spatial partitioning schemes only a fraction of the memory is used and a higher numerical precision is intrinsic. The new method is simple to implement and its high performance is demonstrated by extensive measurements including massive as well as dynamic scenes, where we focus on the total time to image including the construction cost rather than on only frames per second.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130730449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}