Proceedings Virtual Reality Annual International Symposium '95: Latest Articles

Model based vision as feedback for virtual reality robotics environments
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512486
E. Natonek, T. Zimmerman, Lorenzo Flueckiger
Abstract: Task definition methods for robotic systems are often difficult to use. "On-line" programming methods are often time-consuming or risky for the human operator or the robot itself, while "off-line" techniques are tedious and complex. In addition, operator training is costly and time-consuming. In a Virtual Reality Robotics Environment (VRRE), users are not asked to write complicated functions, but can operate complex robotic systems in an intuitive and cost-effective way. However, a VRRE is only effective if all environment changes and object movements are fed back to the virtual manipulating system. The paper describes the use of a VRRE for a semi-autonomous robot system comprising an industrial 5-axis robot, its virtual equivalent, and a model-based vision system used as feedback. The user is immersed in a 3-D space built from models of the robot's environment and directly interacts with the virtual "components", defining tasks and dynamically optimizing them. A model-based vision system locates objects in the real workspace to update the VRRE through a bi-directional communication link. To enhance the capabilities of the VRRE, a reflex-type behavior based on vision has been implemented: by controlling the real robot locally (independently of the VRRE), the operator is relieved of handling small environmental changes due to transmission delays. Thus, once the tasks have been optimized in the VRRE, they are sent to the real robot, and a semi-autonomous process ensures their correct execution thanks to a camera mounted directly on the robot's end effector. If the environmental changes are too significant, however, the robot stops, updates the VRRE with the new environmental configuration, and waits for task redesign. Because the operator interacts with the robotic system at a task-oriented high level, VRRE systems are easily portable to other robotics environments (mobile robotics and micro-assembly).
Citations: 20
BrickNet: sharing object behaviors on the Net
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512475
Gurminder Singh, L. Serra, Willie Png, Audrey Wong, N. Hern
Abstract: In a majority of networked virtual worlds, object sharing is limited to object geometries only. The BrickNet toolkit extends the sharing of objects to include dynamic object behaviors. This is achieved by combining a structured organizational paradigm for virtual worlds with an interpreted language. Sharing in virtual worlds is handled by transferring the program code that builds the structure and executes the behavior. The range of behaviors that can be shared in BrickNet includes simple behaviors, virtual world dependent behaviors, reactive behaviors and capability-based behaviors.
Citations: 103
Using texture maps to correct for optical distortion in head-mounted displays
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512493
B. Watson, L. Hodges
Abstract: This paper describes a fast method of correcting for optical distortion in head-mounted displays (HMDs). Since the distorted display surface in an HMD is not rectilinear, the shape and location of the graphics window used with the display must be chosen carefully, and some corrections made to the predistortion model. A distortion correction might be performed with optics that reverse the distortion caused by HMD lenses, but such optics can be expensive and offer a correction for only one specific HMD. Integer incremental methods or a lookup table might be used to calculate the correction, but an I/O bottleneck makes this impractical in software. Instead, a texture map may be defined that approximates the required optical correction. Recent equipment advances allow undistorted images to be input into texture mapping hardware at interactive rates. Built-in filtering handles predistortion aliasing artifacts.
Citations: 70
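The core idea in the Watson and Hodges abstract above is to apply the inverse of the lens distortion when sampling the rendered image. The following sketch illustrates that idea with a generic radial model r' = r(1 + k r^2); the coefficient `k`, the `predistort` function, and the sample grid are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of texture-map predistortion: render the scene
# undistorted, then look it up through texture coordinates that apply an
# approximate inverse of the HMD lens distortion. The radial model and the
# coefficient k are assumptions standing in for the real optics.

def predistort(u, v, k=0.2):
    """Map normalized screen coords (centered at 0) to source texture
    coords using a simple radial-distortion approximation r' = r(1 + k r^2)."""
    r2 = u * u + v * v
    scale = 1.0 + k * r2  # barrel-style predistortion factor
    return u * scale, v * scale

# A coarse mesh of predistorted texture coordinates; texture-mapping
# hardware would interpolate (and filter) between these vertices.
grid = [predistort(u / 2.0, v / 2.0) for u in range(-2, 3) for v in range(-2, 3)]

print(predistort(0.0, 0.0))  # the image center is unchanged: (0.0, 0.0)
print(predistort(1.0, 0.0))  # an edge point is pushed outward: (1.2, 0.0)
```

In the paper's setting the per-vertex coordinates would be computed once for a given lens and fed to the texture-mapping hardware, so the correction costs one textured polygon mesh per frame.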
Realizing the full potential of virtual reality: human factors issues that could stand in the way
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512476
K. Stanney
Abstract: Reviews several significant human factors issues that could stand in the way of virtual reality realizing its full potential. These issues involve maximizing human performance efficiency in virtual environments, minimizing health and safety issues, and circumventing potential social issues through proactive assessment.
Citations: 127
Pen-based force display for precision manipulation in virtual environments
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512499
Pietro Buttolo, B. Hannaford
Abstract: We describe the structure of a force display recently implemented for precision manipulation of scaled or virtual environments. We discuss the advantages of direct-drive parallel manipulators over geared serial manipulators for human-robot interaction applications and introduce the serial-parallel structure we chose for our robot, which interfaces with the human operator either at the fingertip or at the tip of a freely held pen-like instrument. We derive the statics and the dynamics, and then introduce the optimization criteria that allowed us to choose the dimensional parameters for the force display. Finally, we show some of the potential applications for this device.
Citations: 79
EM-an environment manager for building networked virtual environments
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512474
Qunjie Wang, Mark W. Green, Christopher D. Shaw
Abstract: The Environment Manager (EM) is a high-level tool for constructing both single-user and multi-user virtual environments. A script file is used to initialize and run virtual worlds. Independent applications can share information and cooperate with each other across the Internet. EM reduces the effort required to produce a networked virtual world by providing high-level support for application replication, network configuration, communication management and concurrency control. This paper describes the architecture and implementation of EM.
Citations: 68
Virtual-reality monitoring
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512479
H. Hoffman, Keith C. Hullfish, Stacey J. Houston
Abstract: Investigates whether subjects could separate memories of events experienced in virtual reality from real and imagined events: a decision process we term 'virtual-reality monitoring'. Participants studied 8 separate spatial configurations of red geometric objects arranged on a life-sized chessboard, 8 configurations in virtual reality (an immersive, computer-simulated world), and imagined objects in 8 other configurations. On a later source identification memory test, subjects were generally able to correctly identify the sources of the events. A 'memory characteristics questionnaire' was administered to assess differences in qualitative characteristics of memories for virtual, real and imagined events. Differences were found that could potentially serve as cues to help people decide where their memories originated. Results are interpreted within the Johnson-Raye (1981) theoretical framework.
Citations: 30
A simple and efficient method for accurate collision detection among deformable polyhedral objects in arbitrary motion
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512489
Andrew Smith, Y. Kitamura, H. Takemura, F. Kishino
Abstract: We propose an accurate collision detection algorithm for use in virtual reality applications. The algorithm works for three-dimensional graphical environments where multiple objects, represented as polyhedra (boundary representation), are undergoing arbitrary motion (translation and rotation). The algorithm can be used directly for both convex and concave objects, and objects can be deformed (non-rigid) during motion. The algorithm works efficiently by first localizing possible collision regions using bounding box and spatial subdivision techniques, thereby reducing the number of face pairs that need to be checked for interference; face pairs that remain after this pruning stage are then accurately checked for interference. The algorithm is efficient, simple to implement, and does not require any memory-intensive auxiliary data structures to be precomputed and updated. Since polyhedral shape representation is one of the most common shape representation schemes, this algorithm should be useful to a wide audience. Performance results are given to show the efficiency of the proposed method.
Citations: 75
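The pruning stage described in the Smith et al. abstract above — cheap bounding-box tests that discard pairs before any exact face-face interference check — can be sketched as follows. The function names and point-cloud data layout are illustrative assumptions, not the paper's implementation (which also uses spatial subdivision).

```python
# Illustrative sketch of bounding-box pruning for collision detection:
# axis-aligned bounding boxes (AABBs) cheaply rule out object pairs that
# cannot intersect, so only surviving pairs need exact interference tests.

def aabb(points):
    """Axis-aligned bounding box of a point cloud: (mins, maxs)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabbs_overlap(a, b):
    """True iff two AABBs intersect on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def candidate_pairs(objects):
    """Return index pairs whose AABBs overlap; only these need exact tests.
    Recomputing the boxes each frame also handles deforming objects."""
    boxes = [aabb(o) for o in objects]
    return [(i, j)
            for i in range(len(objects))
            for j in range(i + 1, len(objects))
            if aabbs_overlap(boxes[i], boxes[j])]

# Two nearby unit cubes and one far away: only the near pair survives pruning.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
near = [(x + 0.5, y, z) for x, y, z in cube]
far = [(x + 10, y, z) for x, y, z in cube]
print(candidate_pairs([cube, near, far]))  # -> [(0, 1)]
```

Because the boxes are recomputed from the current vertex positions, this pruning step needs no precomputed hierarchy, matching the abstract's claim of avoiding memory-intensive auxiliary structures.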
Visual resolution and spatial performance: the trade-off between resolution and interactivity
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512481
G. Smets, K. Overbeeke
Abstract: A series of experiments is reported in which subjects performed a search-and-act spatial task in conditions of reduced resolution and exploratory freedom. Images were produced using miniature cameras, comparing static camera position, passive camera movement, and head-coupled immersive VR/teleoperation conditions. By using cameras and real light, time lags could be avoided. Video processors were used to artificially reduce spatial and temporal resolutions. Results show that although spatial and intensity resolutions are very important in static viewing conditions, like those of traditional image-producing computer graphics, subjects can complete the task in head-mounted (VR-like) conditions with resolutions as low as 18×15 pixels. Furthermore, results show that animation of the image viewpoint does not always improve spatial performance when the animation is not user-controlled; in some conditions, performance actually got worse by adding passive movement.
Citations: 16
A vision-based head tracker for fish tank virtual reality-VR without head gear
Pub Date: 1995-03-11 | DOI: 10.1109/VRAIS.1995.512484
J. Rekimoto
Abstract: A practical and robust head-position tracking method using computer vision is presented. By combining two simple image processing techniques, this tracker can report the position of the user's head in real time. All image processing is performed by software running on normal mid-range workstations. This tracker can support desktop virtual reality (also referred to as "fish tank VR"), thereby enabling a user to use a wide range of 3D systems without having to put on any equipment. An experiment conducted by the author suggests this tracker can improve a user's ability to understand complex 3D structures presented on the display.
Citations: 61