{"title":"Pomegranate: a fully scalable graphics architecture","authors":"Matthew Eldridge, Homan Igehy, P. Hanrahan","doi":"10.1145/344779.344981","DOIUrl":"https://doi.org/10.1145/344779.344981","url":null,"abstract":"Pomegranate is a parallel hardware architecture for polygon rendering that provides scalable input bandwidth, triangle rate, pixel rate, texture memory and display bandwidth while maintaining an immediate-mode interface. The basic unit of scalability is a single graphics pipeline, and up to 64 such units may be combined. Pomegranate's scalability is achieved with a novel “sort-everywhere” architecture that distributes work in a balanced fashion at every stage of the pipeline, keeping the amount of work performed by each pipeline uniform as the system scales. Because of the balanced distribution, a scalable network based on high-speed point-to-point links can be used for communicating between the pipelines. Pomegranate uses the network to load balance triangle and fragment work independently, to provide a shared texture memory and to provide a scalable display system. The architecture provides one interface per pipeline for issuing ordered, immediate-mode rendering commands and supports a parallel API that allows multiprocessor applications to exactly order drawing commands from each interface. A detailed hardware simulation demonstrates performance on next-generation workloads. Pomegranate operates at 87-99% parallel efficiency with 64 pipelines, for a simulated performance of up to 1.10 billion triangles per second and 21.8 billion pixels per second.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124397057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video textures","authors":"Arno Schödl, R. Szeliski, D. Salesin, Irfan Essa","doi":"10.1145/344779.345012","DOIUrl":"https://doi.org/10.1145/344779.345012","url":null,"abstract":"This paper introduces a new type of medium, called a video texture, which has qualities somewhere between those of a photograph and a video. A video texture provides a continuous infinitely varying stream of images. While the individual frames of a video texture may be repeated from time to time, the video sequence as a whole is never repeated exactly. Video textures can be used in place of digital photos to infuse a static image with dynamic qualities and explicit actions. We present techniques for analyzing a video clip to extract its structure, and for synthesizing a new, similar looking video of arbitrary length. We combine video textures with view morphing techniques to obtain 3D video textures. We also introduce video-based animation, in which the synthesis of video textures can be guided by a user through high-level interactive controls. Applications of video textures and their extensions include the display of dynamic scenes on web pages, the creation of dynamic backdrops for special effects and games, and the interactive control of video-based animation.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122356355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An autostereoscopic display","authors":"K. Perlin, Salvatore Paxia, J. Kollin","doi":"10.1145/344779.344933","DOIUrl":"https://doi.org/10.1145/344779.344933","url":null,"abstract":"We present a display device which solves a long-standing problem: to give a true stereoscopic view of simulated objects, without artifacts, to a single unencumbered observer, while allowing the observer to freely change position and head rotation. Based on a novel combination of temporal and spatial multiplexing, this technique will enable artifact-free stereo to become a standard feature of display screens, without requiring the use of special eyewear. The availability of this technology may significantly impact CAD and CHI applications, as well as entertainment graphics. The underlying algorithms and system architecture are described, as well as hardware and software aspects of the implementation.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129460069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time-dependent visual adaptation for fast realistic image display","authors":"S. Pattanaik, J. Tumblin, Yangli Hector Yee, D. Greenberg","doi":"10.1145/344779.344810","DOIUrl":"https://doi.org/10.1145/344779.344810","url":null,"abstract":"Human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. This paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user's visual responses to those the user would experience in a real-world scene. Large, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. Asymmetric mechanisms govern these time-dependent adjustments, and offer adaptation to increased light that is much more rapid than adjustment to darkness. We derive a new tone reproduction operator that simulates these mechanisms. The operator accepts a stream of scene intensity frames and creates a stream of color display images. All operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. ept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. We demonstrate its performance on both synthetically generated and acquired “real-world” scenes with large dynamic variations of illumination and contrast.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127108129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conservative volumetric visibility with occluder fusion","authors":"G. Schaufler, Julie Dorsey, Xavier Décoret, F. Sillion","doi":"10.1145/344779.344886","DOIUrl":"https://doi.org/10.1145/344779.344886","url":null,"abstract":"Visibility determination is a key requirement in a wide range of graphics algorithms. This paper introduces a new approach to the computation of volume visibility, the detection of occluded portions of space as seen from a given region. The method is conservative and classifies regions as occluded only when they are guaranteed to be invisible. It operates on a discrete representation of space and uses the opaque interior of objects as occluders. This choice of occluders facilitates their extension into adjacent opaque regions of space, in essence maximizing their size and impact. Our method efficiently detects and represents the regions of space hidden by such occluders. It is the first one to use the property that occluders can also be extended into empty space provided this space is itself occluded from the viewing volume. This proves extremely effective for computing the occlusion by a set of occluders, effectively realizing occluder fusion. An auxiliary data structure represents occlusion in the scene and can then be queried to answer volume visibility questions. We demonstrate the applicability to visibility preprocessing for real-time walkthroughs and to shadow-ray acceleration for extended light sources in ray tracing, with significant acceleration in both cases.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125632828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monte Carlo evaluation of non-linear scattering equations for subsurface reflection","authors":"M. Pharr, P. Hanrahan","doi":"10.1145/344779.344824","DOIUrl":"https://doi.org/10.1145/344779.344824","url":null,"abstract":"We describe a new mathematical framework for solving a wide variety of rendering problems based on a non-linear integral scattering equation. This framework treats the scattering functions of complex aggregate objects as first-class rendering primitives; these scattering functions accurately account for all scattering events inside them. We also describe new techniques for computing scattering functions from the composition of scattering objects. We demonstrate that solution techniques based on this new approach can be more efficient than previous techniques based on radiance transport and the equation of transfer and we apply these techniques to a number of problems in rendering scattering from complex surfaces.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131884771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sampling plausible solutions to multi-body constraint problems","authors":"Stephen Chenney, D. Forsyth","doi":"10.1145/344779.344882","DOIUrl":"https://doi.org/10.1145/344779.344882","url":null,"abstract":"Traditional collision intensive multi-body simulations are difficult to control due to extreme sensitivity to initial conditions or model parameters. Furthermore, there may be multiple ways to achieve any one goal, and it may be difficult to codify a user's preferences before they have seen the available solutions. In this paper we extend simulation models to include plausible sources of uncertainty, and then use a Markov chain Monte Carlo algorithm to sample multiple animations that satisfy constraints. A user can choose the animation they prefer, or applications can take direct advantage of the multiple solutions. Our technique is applicable when a probability can be attached to each animation, with “good” animations having high probability, and for such cases we provide a definition of physical plausibility for animations. We demonstrate our approach with examples of multi-body rigid-body simulations that satisfy constraints of various kinds, for each case presenting animations that are true to a physical model, are significantly different from each other, and yet still satisfy the constraints.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115146926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward a psychophysically-based light reflection model for image synthesis","authors":"F. Pellacini, J. Ferwerda, D. Greenberg","doi":"10.1145/344779.344812","DOIUrl":"https://doi.org/10.1145/344779.344812","url":null,"abstract":"In this paper we introduce a new light reflection model for image synthesis based on experimental studies of surface gloss perception. To develop the model, we've conducted two experiments that explore the relationships between the physical parameters used to describe the reflectance properties of glossy surfaces and the perceptual dimensions of glossy appearance. In the first experiment we use multidimensional scaling techniques to reveal the dimensionality of gloss perception for simulated painted surfaces. In the second experiment we use magnitude estimation methods to place metrics on these dimensions that relate changes in apparent gloss to variations in surface reflectance properties. We use the results of these experiments to rewrite the parameters of a physically-based light reflection model in perceptual terms. The result is a new psychophysically-based light reflection model where the dimensions of the model are perceptually meaningful, and variations along the dimensions are perceptually uniform. We demonstrate that the model can facilitate describing surface gloss in graphics rendering applications. This work represents a new methodology for developing light reflection models for image synthesis.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125821043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illuminating micro geometry based on precomputed visibility","authors":"W. Heidrich, K. Daubert, J. Kautz, H. Seidel","doi":"10.1145/344779.344984","DOIUrl":"https://doi.org/10.1145/344779.344984","url":null,"abstract":"Many researchers have been arguing that geometry, bump maps, and BRDFs present a hierarchy of detail that should be exploited for efficient rendering purposes. In practice however, this is often not possible due to inconsistencies in the illumination for these different levels of detail. For example, while bump map rendering often only considers direct illumination and no shadows, geometry-based rendering and BRDFs will mostly also respect shadowing effects, and in many cases even indirect illumination caused by scattered light. In this paper, we present an approach for overcoming these inconsistencies. We introduce an inexpensive method for consistently illuminating height fields and bump maps, as well as simulating BRDFs based on precomputed visibility information. With this information we can achieve a consistent illumination across the levels of detail. The method we propose offers significant performance benefits over existing algorithms for computing the light scattering in height fields and for computing a sampled BRDF representation using a virtual gonioreflectometer. The performance can be further improved by utilizing graphics hardware, which then also allows for interactive display. Finally, our method also approximates the changes in illumination when the height field, bump map, or BRDF is applied to a surface with a different curvature.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129982808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relief texture mapping","authors":"M. M. O. Neto, G. Bishop, David K. McAllister","doi":"10.1145/344779.344947","DOIUrl":"https://doi.org/10.1145/344779.344947","url":null,"abstract":"We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.","PeriodicalId":269415,"journal":{"name":"Proceedings of the 27th annual conference on Computer graphics and interactive techniques","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128564121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}