{"title":"Integrating flying and fish tank metaphors with cyclopean scale","authors":"C. Ware, Daniel J. Fleet","doi":"10.1109/CGI.1997.601268","DOIUrl":"https://doi.org/10.1109/CGI.1997.601268","url":null,"abstract":"In fish tank VR environments, the screen is used as a window into a virtual environment. This effectively creates a useful 3D workspace in the vicinity of the monitor screen, near to the user. However, many geographical applications require the user to cover large virtual distances and to support this a flying interface is often provided. The authors describe a method for combining the flying and fish tank metaphors to create a practical working environment. The central insight developed in the paper is that a geometric transformation that they call the \"cyclopean scale\" enables the simple combination of flying and fish tank VR interaction metaphors. Cyclopean scale continuously scales the working environment to lie just behind the screen in terms of stereoscopic depth. Cyclopean scale allows for fish tank VR viewing and also places objects at a convenient distance for manipulation, optimizes stereo display parameters and reduces stereo display problems (vergence focus conflict). They have implemented this technique in a system called Fledermaus VR with a cable route editing task. This is an application for planning the layout of submarine cables.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125413581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curve and surface design using multiresolution constraints","authors":"Shigeo Takahashi, Y. Shinagawa, T. Kunii","doi":"10.1109/CGI.1997.601288","DOIUrl":"https://doi.org/10.1109/CGI.1997.601288","url":null,"abstract":"The paper presents a method of designing curves and surfaces by solving the constraints imposed on the shapes at multiresolution levels. In this method, the curves and surfaces are represented by endpoint interpolating B splines and their corresponding wavelets. At each resolution level, the shape is determined by minimizing the energy function subject to the deformation of the shape while preserving the given constraints. Constraints at a low resolution level are converted to those at a high resolution level using wavelet transforms in order to associate all the constraints with the common basis functions. The constraints at multiresolution levels are then solved recursively from low to high resolution levels. Design examples are also presented.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126959895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Calligraphic character synthesis using a brush model","authors":"H. Ip, Helena T. F. Wong","doi":"10.1109/CGI.1997.601261","DOIUrl":"https://doi.org/10.1109/CGI.1997.601261","url":null,"abstract":"The paper proposes a novel methodology which allows calligraphic writing to be synthesized realistically. The approach models the physical process of brush stroke creation and consists of three separate aspects, namely, the physical geometry of the writing brush, the dynamic movement, e.g., the position and orientation, of the brush along the stroke trajectory and the amount of ink absorbed in the brush bundle as well as the ink depositing process. By controlling these physical parameters associated with the writing process, very realistic calligraphic writing can be generated. In particular, the aesthetic features commonly associated with calligraphy, such as the varying widths of a stroke, the impression of physical rubbing between the brush and the underlying paper, the varying shades of grey caused by different degrees of ink content in the brush, and the black and white trails created by fast movement of a drying brush can be simulated. This is the first time physically-based model of a brush has been used to synthesize calligraphic writing and the model has been implemented on a PC-based platform.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115522994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sketching shadows and highlights to position lights","authors":"Pierre Poulin, K. Ratib, Marco Jacque","doi":"10.1109/CGI.1997.601272","DOIUrl":"https://doi.org/10.1109/CGI.1997.601272","url":null,"abstract":"In inverse shading, a user provides information about the desired shading as she/he would like it to appear in the final image. The computer then interprets this information to identify the best values for the various shading parameters that would lead to the desired visual effect. The authors introduce an approach based on sketching in order to position light sources. Point light sources are positioned by sketches of shadows or highlights. Extended light sources are positioned by sketches of umbra or penumbra. The resulting system allows one to quickly position light sources and to refine their positions interactively and more intuitively.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128571239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cached frame buffer system for object-space parallel processing systems","authors":"Hiroaki Kobayashi","doi":"10.1109/CGI.1997.601296","DOIUrl":"https://doi.org/10.1109/CGI.1997.601296","url":null,"abstract":"The object space parallel processing for global illumination models is one of the most promising approaches to fast photorealistic image synthesis. However, there is a potential bottleneck between processing elements and a frame buffer in massively parallel processing systems based on the object space parallel processing, and this factor may restrict their scalable performance. To solve this problem, the paper presents a novel frame buffer system, named a cached frame buffer system. By adopting the cached frame buffer system into the object space parallel processing systems, the overhead of the frame buffer access due to conflicts and long latency can be reduced, and the potential of the object space parallel processing system with a large number of processing elements will be fully exploited.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130496743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VUEMS: a virtual urban environment modeling system","authors":"S. Donikian","doi":"10.1109/CGI.1997.601278","DOIUrl":"https://doi.org/10.1109/CGI.1997.601278","url":null,"abstract":"The Praxitele project is responsible for the design of a new kind of transportation in an urban environment, which consists of a fleet of electric public cars. The realization of such a project requires experimentations with the behaviour of autonomous vehicles in the urban environment. It was necessary to design a virtual urban environment in which simulations could be done. Reproducing the real traffic of a city as completely as possible, implies the simulation of autonomous entities like living beings. Such entities are able to perceive their environment, to communicate with other creatures and to execute some actions either on themselves or on their environment. Interactions between an object and its environment are, most of the time, very simple: sensors and actuators are reduced to minimal capabilities which permit them only to avoid obstacles in a 2D or 3D world. This is due to the fact that databases for virtual environments are often confined to the geometric level, when they must also contain physical, topological and semantic information. Accordingly, we propose a model which is designed to connect different levels of representation, by assembling geometric, topological and semantic data in the field of traffic simulation, and its implementation in VUEMS, a Modelling System of Urban Road Network. From real world data (when available), we construct a model of the virtual urban environment, integrating all the information needed to describe the realistic behaviour of car drivers and pedestrians.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129626540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous animated interactive characters: do we need them?","authors":"B. Blumberg","doi":"10.1109/CGI.1997.601267","DOIUrl":"https://doi.org/10.1109/CGI.1997.601267","url":null,"abstract":"The author addresses the role of autonomy in interactive animated characters. He argues that even a small amount of autonomy can lift what is already a great interactive character into a new dimension. He shows how different levels of autonomy may be appropriate for different types of characters. He then argues that autonomy, intentionality, variability and adaptation are all critical components in creating the illusion of life in interactive characters. He reviews current work in the field and proposes a number of practical applications for autonomous characters in both interactive and non-interactive animation.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122847952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representing and rendering sweep objects using volume models","authors":"G. Sealy","doi":"10.1109/CGI.1997.601264","DOIUrl":"https://doi.org/10.1109/CGI.1997.601264","url":null,"abstract":"The authors describe a method for generating arbitrary sweep objects, where the object being swept may be a 2D contour or a 3D object. The path taken by the object as it is swept can include arbitrary affine transformations, for example stretching and rotating. Thus one can produce effects such as tapering and twisting with ease. The method described composites the object being swept into a volume model at a number of points along its defined path. The points at which the object is composited are determined adaptively.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"24 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121032138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Visorama system: a functional overview of a new virtual reality environment","authors":"A. Matos, L. Velho, J. Gomes, Andr Parente, Heloisa Siffert","doi":"10.1109/CGI.1997.601303","DOIUrl":"https://doi.org/10.1109/CGI.1997.601303","url":null,"abstract":"The recent developments in image-based rendering have enabled a representation of virtual environments based on a simulation of panoramas, which we call virtual panoramas. Current virtual panorama systems do not provide a natural and immersive interaction with the environment. We propose a new system that uses hardware and software components to provide a natural and immersive interaction with virtual panoramas. As part of the system we propose a specific representation for the interactions in a virtual panorama. This representation can be used as a basis for the design of a high-level language for the creation of such environments.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130303121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reversibly visible polygons and polygonal approximation in two dimensional space","authors":"Sasipalli V. S. Rao, Harada Koichi","doi":"10.1109/CGI.1997.601276","DOIUrl":"https://doi.org/10.1109/CGI.1997.601276","url":null,"abstract":"A digitized picture in a 2D array of points is often desired to be approximated by polygonal lines, with the smallest number of sides under the given error tolerance E. To approximate the polygonal line of such data, we introduce two new terms called \"windows in the edges\" and \"reversibly visible polygons\". We also present linear time algorithms that find minimax polygons, windows in the edges and the reversibly visible polygons. Based on these algorithms we finally produce a general polygonal line that lies in the reversibly visible polygon and approximates the polygonal line of the given data.","PeriodicalId":285672,"journal":{"name":"Proceedings Computer Graphics International","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131098509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}