Darius Burschka, Gregory Hager, Z. Dodds, Martin Jägersand, Dana Cobzas, Keith Yerex
"Recent methods for image-based modeling and rendering"
In: IEEE Virtual Reality, 2003. Proceedings (2003-03-22). DOI: https://doi.org/10.1109/VR.2003.1191174
Abstract: A long-standing goal in image-based modeling and rendering is to capture a scene from camera images and construct a model sufficient to allow photo-realistic rendering of new views. With the confluence of computer graphics and vision, combining research on recovering geometric structure from uncalibrated cameras with modeling and rendering has yielded numerous new methods. Yet many challenging issues remain to be addressed before a sufficiently general and robust system could be built to, for instance, allow an average user to model their home and garden from camcorder video. This tutorial aims to give researchers and students in computer graphics a working knowledge of the relevant theory and techniques, covering the steps from real-time vision for tracking and the capture of scene geometry and appearance to the efficient representation and real-time rendering of image-based models. It also includes hands-on demos of real-time visual tracking, modeling, and rendering systems.
U. Bockholt, A. Bisler, M. Becker, W. Müller-Wittig, G. Voss
"Augmented reality for enhancement of endoscopic interventions"
In: IEEE Virtual Reality, 2003. Proceedings (2003-03-22). DOI: https://doi.org/10.1109/VR.2003.1191126
Abstract: Computer-assisted operation planning systems are gaining more and more recognition in the field of surgery. These systems offer new possibilities for preparing an intervention, with the goal of shortening the expensive operating-room time required for the intervention; the safest and most effective surgical approach should be selected. But it is often difficult to transfer the output of the planning system to the intra-operative situation and thus to take the planning results into account in the real intervention. At the Fraunhofer Institute for Computer Graphics (IGD) in Darmstadt and the Centre for Advanced Media Technology (CAMTech) in Singapore, methods are being developed to bridge the gap between the external planning session and the intra-operative case: augmented reality (AR) techniques are used to overlay preoperatively scanned image data, as well as the results of the planning session, onto the operation field.
{"title":"Easy calibration of a head-mounted projective display for augmented reality systems","authors":"Chunyu Gao, H. Hua, N. Ahuja","doi":"10.1109/VR.2003.1191121","DOIUrl":"https://doi.org/10.1109/VR.2003.1191121","url":null,"abstract":"Augmented reality (AR) superimposes computer-generated virtual images on the real world to allow users exploring both virtual and real worlds simultaneously. For a successful augmented reality application, an accurate registration of a virtual object with its physical counterpart has to be achieved, which requires precise knowledge of the projection information of the viewing device. The paper proposes a fast and easy off-line calibration strategy based on well-established camera calibration methods. Our method does not need exhausting effort on the collection of world-to-image correspondence data. All the correspondence data are sampled with an image based method and they are able to achieve sub-pixel accuracy. The method is applicable for all AR systems based on optical see-through head-mounted display (HMD), though we took a head-mounted projective display (HMPD) as the example. We first review the calibration requirements for an augmented reality system and the existing calibration methods. Then a new view projection model for optical see through HMD is addressed in detail, and proposed calibration method and experimental result are presented. Finally, the evaluation experiments and error analysis are also included. The evaluation results show that our calibration method is fairly accurate and consistent.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. 
Proceedings.","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123200830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth perception and visual after-effects at stereoscopic workbench displays","authors":"T. Alexander, J. Conradi, C. Winkelholz","doi":"10.1109/VR.2003.1191152","DOIUrl":"https://doi.org/10.1109/VR.2003.1191152","url":null,"abstract":"The vivid and clear way of virtual scene presentation in virtual environments (VE) is nearly exclusively accomplished by stereoscopic displays. Depth perception with these displays differs from the viewing conditions in reality and causes usability problems. For this reason three important aspects of visualization at a stereoscopic workbench display were analyzed. The results of three experiments with terrain data as an example application show a significant increase of depth perception when using stereoscopy, while map texturing causes a significant decrease. For additional wireframe texture and head-tracking, no significant effects were found. With regards to the upper and lower bounds of stereoscopic visualization a linear relationship between the maximum elevation of single objects and the distance between fixation and projection plane was specified by regression analysis. Finally, it is shown that 1/2 hour activity at such a display does not result in negative after-effects for the visual system, including visual acuity, phoria, fusion, and stereoscopy. These results suggest the use of stereoscopic workbench displays for presentation of three-dimensional terrain data. In contrast, deficits of depth perception are verified resulting from overload of visual information, or from using a parallax which is too large for the presentation.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. 
Proceedings.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127932581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward the innovative collaboration between art and science: the task in the age of media culture through case studies in the contemporary field of media arts","authors":"Itsuo Sakane","doi":"10.1109/VR.2003.1191134","DOIUrl":"https://doi.org/10.1109/VR.2003.1191134","url":null,"abstract":"Since the middle of the 1960s, a new movement toward the collaboration between art andtechnology has been growing all over the world almost at the same time, partly influenced by thecritical writing of C.P. Snow's \"The Two Cultures\" and Georgy Kepes's insightful essays in\"The New Landscape\", and partly by the appearance of new media technology expedited by thetheory of Marshal MacLuhan. From the 1970s through 80s, this movement has been graduallyshifting to the digital media arena due to expanding computer technology. Since then, its majorcreative trend, enhancing the collaboration between art, science, and technology, has becomeeven stronger, and it has been appealing to society as one of the most desirable culturalcontributions in history. I have been witnessing such historical movements since the 1960s as a journalist until recently, and I cannot help but think that without such an active integrationbetween the artistic sensibility and scientific way of thinking in the future, we will be unable toovercome the conflicts among different cultures in the world, which have become more andmore serious.Since the beginning of the 1980s, the introduction of powerful digital media technology hasgiven us the potential to make this integration more feasible. By using such media technology asa bridging tool, we now have new scope to make the collaboration between art and scienceeasier. For these reasons, in the past 20 years, ambitious artists and even engineers who areinterested in artistic expression have started to create innovative artistic works based on suchintegration. 
Especially by using the unique character of digital media, which can bridge thetraditional art genre or category, radically new forms of media art have been created in the pastfew years. In such an environment, new initiatives to establish media art/design schools or mediascience/art institutions are in progress throughout the world. Our school, IAMAS, was organizedas one of such creative institutions in 1996 in Gifu, Japan. After 7 year's efforts through trial anderror, we have been successful in producing new outputs based on such collaboration between artand science. I myself have been involved in administering the school from the beginning,targeting for a better systems base, relying on my own experience since the 60s and theteamwork among our staff and our long time friends in the fields of arts and sciences around theworld.In my presentation, I will show you some of the examples of recent creative works realizedthrough such innovative integration, made by unique artists with engineering skills, and also byscientists or engineers with artistic sensibility. Some of them have backgrounds in both art andscience and have learned the joy of collaboration. I will also show some of the works made bythe students of those new media institutions, including some from our school. If you have beeninterested in such collaboration between art and science previously, you might have seen some oft","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. 
Proceedings.","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130972625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editing real world scenes: augmented reality with image-based rendering","authors":"Dana Cobzas, Martin Jägersand, Keith Yerex","doi":"10.1109/VR.2003.1191169","DOIUrl":"https://doi.org/10.1109/VR.2003.1191169","url":null,"abstract":"We present a method that using only an uncalibrated camera allows the capture of object geometry and appearance, and then at a later stage registration and AR overlay into a new scene. Using only image information first a coarse object geometry is obtained using structure-from-motion, then a dynamic, view dependent texture is estimated to account for the differences between the reprojected coarse model and the training images. In AR rendering, the object structure is interactively aligned in one frame by the user, object and scene structure is registered, and rendered in subsequent frames by a virtual scene camera, with parameters estimated from real-time visual tracking. Using the same viewing geometry for both object acquisition, registration, and rendering ensures consistency and minimizes errors.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124133717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introduction to VR technology","authors":"Jerry Isdale","doi":"10.1109/VR.2003.1191178","DOIUrl":"https://doi.org/10.1109/VR.2003.1191178","url":null,"abstract":"This tutorial provides a fast paced introduction to the technologies of virtual reality, providing a foundation to comprehend the other activities at VR2003. Attendees will learn the component technologies that allow for the creation and experience of desktop and immersive virtual worlds. Each technology topic will provide a definition, brief introduction to the methods and issues, example systems, and references for further investigation (background, research, and commercial). Topics will include the basic hardware and software technologies and also touch on design issues such as usability and story elements.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115098276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of latency on presence in stressful virtual environments","authors":"M. Meehan, Sharif Razzaque, M. Whitton, F. Brooks","doi":"10.1109/VR.2003.1191132","DOIUrl":"https://doi.org/10.1109/VR.2003.1191132","url":null,"abstract":"Previous research has shown that even low end-to-end latency can have adverse effects on performance in virtual environments (VE). This paper reports on an experiment investigating the effect of latency on other metrics of VE effectiveness: physiological response, simulator sickness, and self-reported sense of presence. The VE used in the study includes two rooms: the first is normal and non-threatening; the second is designed to evoke a fear/stress response. Participants were assigned to either a low latency (/spl sim/50 ms) or high latency (/spl sim/90 ms) group. Participants in the low latency condition had a higher self-reported sense of presence and a statistically higher change in heart rate between the two rooms than did those in the high latency condition. There were no significant relationships between latency and simulator sickness.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130548113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dongsik Cho, Jihye Park, G. Kim, Sangwoo Hong, S. Han, Seungyong Lee
"The dichotomy of presence elements: the where and what"
In: IEEE Virtual Reality, 2003. Proceedings (2003-03-22). DOI: https://doi.org/10.1109/VR.2003.1191155
Abstract: One of the goals and defining characteristics of virtual reality systems is to create "presence": to convince users that they are, or are doing something, "in" the synthetic environment. Most research and papers on presence to date have been directed toward formulating definitions of presence and, based on them, identifying the key elements that affect it. We carried out an elaborate experiment in which presence levels were measured (with a subjective questionnaire) in test virtual worlds configured with different combinations of six visual presence elements.
{"title":"The blue-c distributed scene graph","authors":"M. Näf, Edouard Lamboray, O. Staadt, M. Gross","doi":"10.1109/VR.2003.1191157","DOIUrl":"https://doi.org/10.1109/VR.2003.1191157","url":null,"abstract":"We present a distributed scene graph architecture for use in the blue-c, a novel collaborative immersive virtual environment. We extend the widely used OpenGL Performer toolkit to provide a distributed scene graph maintaining full synchronization down to vertex and texel level. We propose a synchronization scheme including customizable, relaxed locking mechanisms. We demonstrate the functionality of our toolkit with two prototype applications in our high-performance virtual reality and visual simulation environment.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115828756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}