IEEE Virtual Reality, 2003. Proceedings. (Latest Publications)

Recent methods for image-based modeling and rendering
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191174
Darius Burschka, Gregory Hager, Z. Dodds, Martin Jägersand, Dana Cobzas, Keith Yerex
Abstract: A long-standing goal in image-based modeling and rendering is to capture a scene from camera images and construct a model sufficient to allow photo-realistic rendering of new views. With the confluence of computer graphics and vision, the combination of research on recovering geometric structure from uncalibrated cameras with modeling and rendering has yielded numerous new methods. Yet many challenging issues remain to be addressed before a sufficiently general and robust system could be built to, for instance, allow an average user to model their home and garden from camcorder video. This tutorial aims to give researchers and students in computer graphics a working knowledge of the relevant theory and techniques, covering the steps from real-time vision for tracking and the capture of scene geometry and appearance to the efficient representation and real-time rendering of image-based models. It also includes hands-on demos of real-time visual tracking, modeling, and rendering systems.
Citations: 34
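The "recovering geometric structure from uncalibrated cameras" step that this tutorial covers can be illustrated with a minimal affine structure-from-motion sketch in the Tomasi-Kanade factorization style. This is not material from the tutorial itself; the camera count, point count, and random data are invented purely for illustration:

```python
import numpy as np

# Illustrative affine structure-from-motion: under an affine camera model,
# the centred 2F x N measurement matrix of tracked image points has rank 3,
# and an SVD factors it into motion and structure (up to an affine ambiguity).
rng = np.random.default_rng(0)
P3d = rng.standard_normal((3, 20))                       # 20 synthetic 3-D points
views = [rng.standard_normal((2, 3)) for _ in range(6)]  # 6 affine cameras

# Stack the 2-D projections from all views and centre each row.
W = np.vstack([A @ P3d for A in views])
W -= W.mean(axis=1, keepdims=True)

# Rank-3 factorization: motion from the scaled left singular vectors,
# structure from the right singular vectors.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M = U[:, :3] * s[:3]
S = Vt[:3]

print(np.allclose(W, M @ S, atol=1e-8))  # noise-free data reconstructs exactly
```

With noisy real tracks the same truncated SVD gives the best rank-3 approximation in the least-squares sense, which is why factorization methods degrade gracefully.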
Augmented reality for enhancement of endoscopic interventions
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191126
U. Bockholt, A. Bisler, M. Becker, W. Müller-Wittig, G. Voss
Abstract: Computer-assisted operation planning systems are gaining more and more recognition in the field of surgery. These systems offer new possibilities to prepare an intervention, with the goal of shortening the expensive time in the operating room required for the intervention. The safest and most effective surgical approach should be selected. Often, however, it is difficult to transfer the output of the planning system to the intra-operative situation and thus to apply the planning results in the real intervention. At the Fraunhofer Institute for Computer Graphics (IGD) in Darmstadt and the Centre for Advanced Media Technology (CAMTech) in Singapore, methods are being developed to bridge the gap between the external planning session and the intra-operative case: augmented reality (AR) techniques are used to overlay preoperatively scanned image data, as well as results of the planning session, onto the operative field.
Citations: 27
Easy calibration of a head-mounted projective display for augmented reality systems
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191121
Chunyu Gao, H. Hua, N. Ahuja
Abstract: Augmented reality (AR) superimposes computer-generated virtual images on the real world, allowing users to explore the virtual and real worlds simultaneously. For a successful augmented reality application, accurate registration of a virtual object with its physical counterpart has to be achieved, which requires precise knowledge of the projection information of the viewing device. The paper proposes a fast and easy off-line calibration strategy based on well-established camera calibration methods. Our method does not require exhaustive effort in collecting world-to-image correspondence data: all correspondence data are sampled with an image-based method and achieve sub-pixel accuracy. The method is applicable to all AR systems based on optical see-through head-mounted displays (HMDs), though we take a head-mounted projective display (HMPD) as the example. We first review the calibration requirements for an augmented reality system and the existing calibration methods. A new view projection model for optical see-through HMDs is then described in detail, and the proposed calibration method and experimental results are presented. Finally, evaluation experiments and an error analysis are included. The evaluation results show that our calibration method is fairly accurate and consistent.
Citations: 20
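The "well-established camera calibration methods" such strategies build on reduce, at their core, to estimating a projection matrix from world-to-image correspondences. The sketch below shows the classical direct linear transform (DLT) step; the point counts, random data, and variable names are illustrative assumptions, not the authors' actual procedure:

```python
import numpy as np

# DLT sketch: estimate a 3x4 projection matrix P from known 3-D points and
# their 2-D images. Synthetic ground truth lets us verify the recovery.
rng = np.random.default_rng(1)
P_true = rng.standard_normal((3, 4))
X = np.vstack([rng.standard_normal((3, 12)), np.ones((1, 12))])  # 12 world points
x = P_true @ X
x /= x[2]                                    # perspective division to pixel coords

# Each correspondence contributes two linear equations in the 12 entries of P.
rows = []
for Xi, (u, v, _) in zip(X.T, x.T):
    rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
    rows.append(np.concatenate([np.zeros(4), Xi, -v * Xi]))
A = np.array(rows)

# The solution is the right singular vector of the smallest singular value.
P_est = np.linalg.svd(A)[2][-1].reshape(3, 4)

# P is recovered only up to scale; align it to the ground truth for comparison.
P_est *= (P_true.ravel() @ P_est.ravel()) / (P_est.ravel() @ P_est.ravel())
print(np.allclose(P_est, P_true, atol=1e-4))
```

In a real calibration the correspondences come from measured targets rather than synthetic data, and the linear estimate is typically refined by nonlinear minimization of reprojection error.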
Depth perception and visual after-effects at stereoscopic workbench displays
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191152
T. Alexander, J. Conradi, C. Winkelholz
Abstract: The vivid and clear presentation of virtual scenes in virtual environments (VE) is accomplished almost exclusively by stereoscopic displays. Depth perception with these displays differs from viewing conditions in reality and causes usability problems. For this reason, three important aspects of visualization at a stereoscopic workbench display were analyzed. The results of three experiments with terrain data as an example application show a significant increase in depth perception when using stereoscopy, while map texturing causes a significant decrease. For additional wireframe texture and head tracking, no significant effects were found. With regard to the upper and lower bounds of stereoscopic visualization, a linear relationship between the maximum elevation of single objects and the distance between the fixation and projection planes was specified by regression analysis. Finally, it is shown that half an hour of activity at such a display does not result in negative after-effects for the visual system, including visual acuity, phoria, fusion, and stereoscopy. These results support the use of stereoscopic workbench displays for the presentation of three-dimensional terrain data. In contrast, deficits of depth perception were verified resulting from an overload of visual information or from using a parallax that is too large for the presentation.
Citations: 13
Toward the innovative collaboration between art and science: the task in the age of media culture through case studies in the contemporary field of media arts
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191134
Itsuo Sakane
Abstract: Since the middle of the 1960s, a new movement toward collaboration between art and technology has been growing all over the world almost simultaneously, influenced partly by the critical writing of C. P. Snow's "The Two Cultures" and György Kepes's insightful essays in "The New Landscape", and partly by the appearance of new media technology expedited by the theory of Marshall McLuhan. From the 1970s through the 80s, this movement gradually shifted to the digital media arena owing to expanding computer technology. Since then, its major creative trend, enhancing the collaboration between art, science, and technology, has become even stronger, and it has appealed to society as one of the most desirable cultural contributions in history. I witnessed such historical movements as a journalist from the 1960s until recently, and I cannot help but think that without an active integration of artistic sensibility and the scientific way of thinking in the future, we will be unable to overcome the conflicts among different cultures in the world, which have become more and more serious. Since the beginning of the 1980s, the introduction of powerful digital media technology has given us the potential to make this integration more feasible. By using such media technology as a bridging tool, we now have new scope to make the collaboration between art and science easier. For these reasons, in the past 20 years, ambitious artists, and even engineers interested in artistic expression, have started to create innovative artistic works based on such integration. Especially by using the unique character of digital media, which can bridge traditional art genres and categories, radically new forms of media art have been created in the past few years. In such an environment, new initiatives to establish media art/design schools or media science/art institutions are in progress throughout the world. Our school, IAMAS, was organized as one such creative institution in 1996 in Gifu, Japan. After seven years of effort through trial and error, we have succeeded in producing new outputs based on such collaboration between art and science. I myself have been involved in administering the school from the beginning, aiming for a better systems base, relying on my own experience since the 60s and on the teamwork among our staff and our long-time friends in the fields of arts and sciences around the world. In my presentation, I will show some examples of recent creative works realized through such innovative integration, made by unique artists with engineering skills and by scientists or engineers with artistic sensibility. Some of them have backgrounds in both art and science and have learned the joy of collaboration. I will also show some of the works made by the students of those new media institutions, including some from our school. If you have been interested in such collaboration between art and science previously, you might have seen some of t…
Citations: 3
Editing real world scenes: augmented reality with image-based rendering
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191169
Dana Cobzas, Martin Jägersand, Keith Yerex
Abstract: We present a method that, using only an uncalibrated camera, allows the capture of object geometry and appearance and, at a later stage, registration and AR overlay into a new scene. Using only image information, a coarse object geometry is first obtained using structure from motion; then a dynamic, view-dependent texture is estimated to account for the differences between the reprojected coarse model and the training images. In AR rendering, the object structure is interactively aligned in one frame by the user; object and scene structure are then registered and rendered in subsequent frames by a virtual scene camera, with parameters estimated from real-time visual tracking. Using the same viewing geometry for object acquisition, registration, and rendering ensures consistency and minimizes errors.
Citations: 12
Introduction to VR technology
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191178
Jerry Isdale
Abstract: This tutorial provides a fast-paced introduction to the technologies of virtual reality, providing a foundation for comprehending the other activities at VR2003. Attendees will learn the component technologies that allow for the creation and experience of desktop and immersive virtual worlds. Each technology topic provides a definition, a brief introduction to the methods and issues, example systems, and references for further investigation (background, research, and commercial). Topics include the basic hardware and software technologies and also touch on design issues such as usability and story elements.
Citations: 4
Effect of latency on presence in stressful virtual environments
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191132
M. Meehan, Sharif Razzaque, M. Whitton, F. Brooks
Abstract: Previous research has shown that even low end-to-end latency can have adverse effects on performance in virtual environments (VE). This paper reports on an experiment investigating the effect of latency on other metrics of VE effectiveness: physiological response, simulator sickness, and self-reported sense of presence. The VE used in the study includes two rooms: the first is normal and non-threatening; the second is designed to evoke a fear/stress response. Participants were assigned to either a low-latency (~50 ms) or a high-latency (~90 ms) group. Participants in the low-latency condition had a higher self-reported sense of presence and a statistically higher change in heart rate between the two rooms than did those in the high-latency condition. There were no significant relationships between latency and simulator sickness.
Citations: 280
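End-to-end latency conditions like those studied here are typically the sum of pipeline stages plus display scan-out, with extra latency injected in whole display frames. The back-of-the-envelope sketch below uses assumed stage timings (not figures reported by the paper) to show how such totals compose:

```python
# Back-of-the-envelope latency budget for a head-tracked VE pipeline.
# All stage timings below are assumed for illustration only.
FRAME_MS = 1000 / 60          # 60 Hz display: ~16.7 ms per frame of scan-out

def end_to_end_ms(tracker_ms, app_ms, render_ms, extra_frames=0):
    """Sum the pipeline stages, one frame of display scan-out, and any
    artificially injected whole-frame delays."""
    return tracker_ms + app_ms + render_ms + (1 + extra_frames) * FRAME_MS

low = end_to_end_ms(tracker_ms=8, app_ms=10, render_ms=15)                 # ~50 ms
high = end_to_end_ms(tracker_ms=8, app_ms=10, render_ms=15, extra_frames=2)
print(round(low), round(high))   # → 50 83  (two injected frames add ~33 ms)
```

Because delay is injected in whole frames, achievable conditions are quantized by the display refresh period, which is one reason experimental latency levels are approximate ("~50 ms", "~90 ms") rather than exact.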
The dichotomy of presence elements: the where and what
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191155
Dongsik Cho, Jihye Park, G. Kim, Sangwoo Hong, S. Han, Seungyong Lee
Abstract: One of the goals and defining characteristics of virtual reality systems is to create "presence" and fool users into believing that they are, or are doing something, "in" the synthetic environment. Most research and papers on presence to date have been directed toward formulating definitions of presence and, based on them, identifying key elements that affect it. We carried out an elaborate experiment in which presence levels were measured (with a subjective questionnaire) in test virtual worlds configured with different combinations of six visual presence elements.
Citations: 40
The blue-c distributed scene graph
IEEE Virtual Reality, 2003. Proceedings. Pub Date: 2003-03-22. DOI: 10.1109/VR.2003.1191157
M. Näf, Edouard Lamboray, O. Staadt, M. Gross
Abstract: We present a distributed scene graph architecture for use in the blue-c, a novel collaborative immersive virtual environment. We extend the widely used OpenGL Performer toolkit to provide a distributed scene graph maintaining full synchronization down to the vertex and texel level. We propose a synchronization scheme including customizable, relaxed locking mechanisms. We demonstrate the functionality of our toolkit with two prototype applications in our high-performance virtual reality and visual simulation environment.
Citations: 81
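The idea of "relaxed locking" in a shared scene graph can be sketched in miniature: writers synchronize on a lock, while readers (e.g. the render traversal) take a versioned snapshot without blocking and may see slightly stale state. This is an illustrative Python toy with invented names, not the blue-c/OpenGL Performer implementation:

```python
import threading

# A shared scene-graph node under a relaxed locking scheme: updates are
# serialized by a lock and published as an immutable versioned snapshot;
# reads are lock-free and tolerate staleness by one or more versions.
class SharedNode:
    def __init__(self, **fields):
        self._lock = threading.Lock()
        self._version = 0
        self._fields = dict(fields)
        self._snapshot = (self._version, dict(self._fields))

    def update(self, **changes):
        with self._lock:                      # writers synchronize here
            self._fields.update(changes)
            self._version += 1
            # Publishing a fresh tuple is atomic w.r.t. concurrent readers.
            self._snapshot = (self._version, dict(self._fields))

    def read(self):
        return self._snapshot                 # lock-free, possibly stale

node = SharedNode(translate=(0.0, 0.0, 0.0))
node.update(translate=(1.0, 2.0, 3.0))
version, fields = node.read()
print(version, fields["translate"])           # → 1 (1.0, 2.0, 3.0)
```

The design trade-off mirrors the one the paper's synchronization scheme addresses: strict per-update locking would keep all replicas exactly consistent but stall the real-time render loop, while relaxed reads keep frame rates stable at the cost of briefly stale geometry.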