{"title":"Linear-time dynamics using Lagrange multipliers","authors":"D. Baraff","doi":"10.1145/237170.237226","DOIUrl":"https://doi.org/10.1145/237170.237226","url":null,"abstract":"Current linear-time simulation methods for articulated figures are based exclusively on reduced-coordinate formulations. This paper describes a general, non-iterative linear-time simulation method based instead on Lagrange multipliers. Lagrange multiplier methods are important for computer graphics applications because they bypass the difficult (and often intractable) problem of parameterizing a system’s degrees of freedom. Given a loop-free set of n equality constraints acting between pairs of bodies, the method takes O(n) time to compute the system’s dynamics. The method does not rely on matrix bandwidth, so no assumptions about the constraints’ topology are needed. Bodies need not be rigid, constraints can be of various dimensions, and unlike reduced-coordinate approaches, nonholonomic (e.g. velocity-dependent) constraints are allowed. An additional set of k one-dimensional constraints which induce loops and/or handle inequalities can be accommodated with cost O(kn). This makes it practical to simulate complicated, closed-loop articulated figures with joint limits and contact at interactive rates. 
A complete description of a sample implementation is provided in pseudocode.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"1997 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131082999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining frequency and spatial domain information for fast interactive image noise removal","authors":"A. N. Hirani, T. Totsuka","doi":"10.1145/237170.237264","DOIUrl":"https://doi.org/10.1145/237170.237264","url":null,"abstract":"Scratches on old films must be removed since these are more noticeable on higher-definition and digital televisions. Wires that suspend actors or cars must be carefully erased during post production of special effects shots. Both of these are time-consuming tasks, but both can be addressed by the following image restoration process: given the locations of noisy pixels to be replaced and a prototype image, restore those noisy pixels in a natural way. We call this process image noise removal, and this paper describes a fast iterative algorithm for it. Most existing algorithms for removing image noise use either frequency domain information (e.g. low-pass filtering) or spatial domain information (e.g. median filtering or stochastic texture generation). The few that do combine the two domains require that the image be band-limited and that the band limits be known. Our algorithm works in both the spatial and frequency domains without imposing these band-limit restrictions, making it possible to fully exploit the advantages of each domain. While global features and large textures are captured in the frequency domain, local continuity and sharpness are maintained in the spatial domain. With a judicious choice of operations and the domains in which they work, our dual-domain approach can reconstruct many contiguous noisy pixels in areas with large patterns while maintaining continuity of features such as lines. In addition, the image intensity does not have to be uniform. These are significant advantages over existing algorithms. Our algorithm is based on a general framework of projection onto convex sets (POCS). Any image analysis technique that can be described as a closed convex set can be cleanly plugged into the iteration loop of our algorithm. 
This is another important advantage of our algorithm. CR Categories: I.3.3 [Computer Graphics]: Picture / Image Generation; Display Algorithms; I.3.6 [Computer Graphics]: Methodology and Techniques – Interaction techniques; I.4.4 [Image Processing]: Restoration; I.4.9 [Image Processing]: Applications.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128905871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"View morphing","authors":"S. Seitz, C. Dyer","doi":"10.1145/237170.237196","DOIUrl":"https://doi.org/10.1145/237170.237196","url":null,"abstract":"Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"40 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120919106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Painterly rendering for animation","authors":"Barbara J. Meier","doi":"10.1145/237170.237288","DOIUrl":"https://doi.org/10.1145/237170.237288","url":null,"abstract":"We present a technique for rendering animations in a painterly style. The difficulty in using existing still-frame methods for animation is getting the paint to “stick” to surfaces rather than randomly change with each frame, while still retaining a hand-crafted look. We extend the still-frame method to animation by solving two major specific problems of previous techniques. First, our method eliminates the “shower door” effect, in which an animation appears as if it were being viewed through textured glass because brush strokes stick to the viewplane, not to the animating surfaces. Second, our technique provides frame-to-frame coherence in animations, so that brush strokes do not change randomly from frame to frame. To maintain coherence, we model surfaces as 3D particle sets which are rendered as 2D paint brush strokes in screen space, much like an artist lays down brush strokes on a canvas. We use geometric and lighting properties of the surfaces to control the appearance of brush strokes. This powerful combination of using 3D particles, surface lighting information, and rendering 2D brush strokes in screen space gives us the painterly style we desire and forces the brush strokes to stick to animating surfaces. By varying lighting and choosing brush stroke parameters we can create many varied painterly styles. We illustrate the method with images and animated sequences and present specific technical and creative suggestions for achieving different looks. 
","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134526410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive meshes","authors":"Hugues Hoppe","doi":"10.1145/237170.237216","DOIUrl":"https://doi.org/10.1145/237170.237216","url":null,"abstract":"Highly detailed geometric models are rapidly becoming commonplace in computer graphics. These models, often represented as complex triangle meshes, challenge rendering performance, transmission bandwidth, and storage capacities. This paper introduces the progressive mesh (PM) representation, a new scheme for storing and transmitting arbitrary triangle meshes. This efficient, lossless, continuous-resolution representation addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement. In addition, we present a new mesh simplification procedure for constructing a PM representation from an arbitrary mesh. The goal of this optimization procedure is to preserve not just the geometry of the original mesh, but more importantly its overall appearance as defined by its discrete and scalar appearance attributes such as material identifiers, color values, normals, and texture coordinates. We demonstrate construction of the PM representation and its applications using several practical models.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123604433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adding force feedback to graphics systems: issues and solutions","authors":"W. Mark, S. Randolph, Mark Finch, James M. Van Verth, Russell M. Taylor","doi":"10.1145/237170.237284","DOIUrl":"https://doi.org/10.1145/237170.237284","url":null,"abstract":"Integrating force feedback with a complete real-time virtual environment system presents problems which are more difficult than those encountered in building simpler force-feedback systems. In particular, lengthy computations for graphics or simulation require a decoupling of the haptic servo loop from the main application loop if high-quality forces are to be produced. We present some approaches to these problems and describe our force-feedback software library, which implements these techniques and provides other benefits including haptic-textured surfaces, device independence, distributed operation, and easy enhancement.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114189498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cellular texture basis function","authors":"Steven Worley","doi":"10.1145/237170.237267","DOIUrl":"https://doi.org/10.1145/237170.237267","url":null,"abstract":"Solid texturing is a powerful way to add detail to the surface of rendered objects. Perlin’s “noise” is a 3D basis function used in some of the most dramatic and useful surface texture algorithms. We present a new basis function which complements Perlin noise, based on a partitioning of space into a random array of cells. We have used this new basis function to produce textured surfaces resembling flagstone-like tiled areas, organic crusty skin, crumpled paper, ice, rock, mountain ranges, and craters. The new basis function can be computed efficiently without the need for precalculation or table storage.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125335513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comic Chat","authors":"David J. Kurlander, Tim Skelly, D. Salesin","doi":"10.1145/237170.237260","DOIUrl":"https://doi.org/10.1145/237170.237260","url":null,"abstract":"Comics have a rich visual vocabulary, and people find them appealing. They are also an effective form of communication. We have built a system, called Comic Chat, that represents on-line communications in the form of comics. Comic Chat automates numerous aspects of comics generation, including balloon construction and layout, the placement and orientation of comic characters, the default selection of character gestures and expressions, the incorporation of semantic panel elements, and the choice of zoom factor for the virtual camera. This paper describes the mechanisms that Comic Chat uses to perform this automation, as well as novel aspects of the program’s user interface. Comic Chat is a working program, allowing groups of people to communicate over the Internet. It has several advantages over other graphical chat programs, including the availability of a graphical history, and a dynamic graphical presentation.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125398713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live computer animation (panel)","authors":"Tim Heidmann","doi":"10.1145/237170.237305","DOIUrl":"https://doi.org/10.1145/237170.237305","url":null,"abstract":"Graphics systems for computer-aided design, scientific computation, visual simulation, and film special effects have become so sophisticated in their ability to generate high-quality, real-time computer animation that they can be used for live creation of graphics and effects for television broadcast. Although the field is typically dominated by special-purpose video processing hardware, these machines are beginning to show up in a variety of innovative applications.","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130044774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advanced television for the United States: status and issues (panel)","authors":"G. Demos","doi":"10.1145/237170.237297","DOIUrl":"https://doi.org/10.1145/237170.237297","url":null,"abstract":"The Advisory Committee on Advanced Television Service, \" ACATS, \" appointed by the FCC, is recommending that we deploy a new television system which includes interlace, 59.94 and 60 Hz, and non-square pixel spacing. These parameters are fundamentally incompatible with modern computer graphics displays, which do not use interlace and which operate at display rates exceeding 70 Hz. Those involved in developing the ACATS proposal do not feel a need to restrict their proposed formats to those that are compatible with computer displays. They further assert that the cost and quality loss associated with converting interlaced 60 Hz video to 70+ Hz non-interlaced (progressive scan) pictures is acceptable. However, under their scenario, responsibility for de-interlacing and frame-rate converting these proposed formats for every computer display would fall upon the computer industry. Other problematic issues include interlace and 59.94/60 Hz, lack of a robust data capacity, non-square pixel spacing in some formats, lack of defined overlay planes, a controversial \" 16:9 \" image aspect ratio, overscan issues, and limited colorimetry. It is also proposed that receiving devices decode each of 18 different image formats, spanning a 6:1 range in resolution and data rate. ACATS proposes that format problems be solved by a later migration strategy away from admittedly obsolete techniques, such as interlaced scanning. However, no scenario is suggested for accomplishing this after deployment of a new television infrastructure, after which it may be impossible to replace the interlaced portions of the system. The deployment of a new national television infrastructure is an opportunity to leave behind the obsolete NTSC television system. 
Since the primary distribution media for computer graphics are film and video, the introduction of new video formats will affect the work of digital production facilities. ACATS claims that its proposal is an appropriate compromise, containing some non-interlaced and some interlaced formats. ACATS is aware that the computer industry needs display rates exceeding 70 Hz for large, bright screens containing computer information. However, it asserts that interlace and 59.94 and 60 Hz are needed more by the existing NTSC broadcast infrastructure and by existing television manufacturers than non-interlaced 70+ Hz is. This panel offers an opportunity to challenge these assertions. Motion picture film runs at 24 frames per second. A display rate of 72 Hz is naturally suggested, since 72 is three times 24; this would satisfy the need of computer displays to exceed 70 Hz. The frame rate of 36 would also form …","PeriodicalId":397587,"journal":{"name":"Proceedings of the 23rd annual conference on Computer graphics and interactive techniques","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133320839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}