{"title":"Food simulator: a haptic interface for biting","authors":"Hiroo Iwata, H. Yano, Takahiro Uemura, Tetsuro Moriya","doi":"10.1109/VR.2004.40","DOIUrl":"https://doi.org/10.1109/VR.2004.40","url":null,"abstract":"The food simulator is a haptic interface that presents biting force. The taste of food arises from a combination of chemical, auditory, olfactory and haptic sensation. Haptic sensation while eating has been an ongoing problem in taste display. The food simulator generates a force on the user's teeth as an indication of food texture. The device is composed of four linkages. The mechanical configuration of the device is designed such that it will fit into the mouth, with a force sensor attached to the end effector. The food simulator generates a force representing the force profile captured from the mouth of a person biting real food. The device has been integrated with auditory and chemical displays for multi-modal sensation of taste. The food simulator has been tested on a large number of participants. The results indicate that the device has succeeded in presenting food texture as well as chemical taste.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129723069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
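The abstract describes the device replaying a force profile captured from a person biting real food. Below is a minimal sketch of such playback, assuming a simple proportional force controller and hypothetical `read_force_sensor` / `set_actuator_force` hardware callbacks; the paper does not specify the actual control scheme.

```python
import time

def play_force_profile(profile, read_force_sensor, set_actuator_force,
                       dt=0.001, kp=0.8):
    """Replay a recorded biting-force profile on the linkage actuators.

    profile: list of (t_seconds, target_force_N) samples captured while
             a person bit real food.
    read_force_sensor / set_actuator_force: hardware I/O callbacks
             (hypothetical; the device itself uses a force sensor on the
             end effector and four linkages).
    """
    start = time.time()
    for t_target, f_target in profile:
        # Wait until this sample's timestamp is reached.
        while time.time() - start < t_target:
            time.sleep(dt)
        # Simple proportional correction toward the recorded force.
        f_measured = read_force_sensor()
        command = f_target + kp * (f_target - f_measured)
        set_actuator_force(command)
```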
{"title":"Projection based olfactory display with nose tracking","authors":"Y. Yanagida, S. Kawato, H. Noma, A. Tomono, N. Tetsutani","doi":"10.1109/VR.2004.62","DOIUrl":"https://doi.org/10.1109/VR.2004.62","url":null,"abstract":"Most attempts to realize an olfactory display have involved capturing and synthesizing the odor, processes that still pose many challenging problems. These difficulties are mainly due to the mechanism of human olfaction, in which a set of so-called \"primary odors\" has not been found. Instead, we focus on spatio-temporal control of odor rather than synthesizing odor itself. Many existing interactive olfactory displays simply diffuse the scent into the air, which does not allow spatio-temporal control of olfaction. Recently, however, several researchers have developed olfactory displays that inject scented air under the nose through tubes. By analogy with visual displays, these systems correspond to head-mounted displays (HMD). They offer a solid way to achieve spatio-temporal control of olfactory space, but they require the user to wear something on his or her face. Here, we propose an unencumbering olfactory display that does not require the user to attach anything to the face. It works by projecting a clump of scented air from a location near the user's nose through free space. We also aim to display a scent to the restricted space around a specific user's nose, rather than scattering scented air by simply diffusing it into the atmosphere. To implement this concept, we used an \"air cannon\" that generates toroidal vortices of the scented air. We conducted a preliminary experiment to examine this method's ability to display scent to a restricted space. The results show that we could successfully display incense to the target user. Next, we constructed prototype systems. We could successfully bring the scented air to a specific user by tracking the position of the user's nose and controlling the orientation of the air cannon toward it.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116547870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
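A minimal sketch of the aiming step implied by the abstract: given a tracked nose position and the air cannon's position, compute pan and tilt angles that point the cannon at the nose. The coordinate convention and function name are assumptions for illustration, not the authors' implementation.

```python
import math

def aim_air_cannon(cannon_pos, nose_pos):
    """Return (pan, tilt) in degrees that point the cannon at the nose.

    Assumes a right-handed coordinate system with x right, y up, z forward;
    pan rotates about the y axis, tilt about the x axis.
    """
    dx = nose_pos[0] - cannon_pos[0]
    dy = nose_pos[1] - cannon_pos[1]
    dz = nose_pos[2] - cannon_pos[2]
    pan = math.degrees(math.atan2(dx, dz))                    # left/right
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # up/down
    return pan, tilt

# Example: cannon at the origin, nose 0.3 m right, 0.1 m up, 1.5 m ahead.
print(aim_air_cannon((0.0, 0.0, 0.0), (0.3, 0.1, 1.5)))
```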
{"title":"Efficient, intuitive user interfaces for classroom-based immersive virtual environments","authors":"D. Bowman, M. Gracey, John F. Lucas","doi":"10.1109/VR.2004.38","DOIUrl":"https://doi.org/10.1109/VR.2004.38","url":null,"abstract":"The educational benefits of immersive virtual environments (VEs) have long been touted, but very few immersive VEs have been used in a classroom setting. We have developed three educational VEs and deployed them in university courses. A key element in the success of these applications is a simple but powerful user interface (UI) that requires no training, yet allows students to interact with the virtual world in meaningful ways. We discuss the design of this UI and the results of an evaluation of its usability in university classrooms.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132200735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Increasing the effective egocentric field of view with proprioceptive and tactile feedback","authors":"Ungyeon Yang, G. Kim","doi":"10.1109/VR.2004.44","DOIUrl":"https://doi.org/10.1109/VR.2004.44","url":null,"abstract":"Multimodality often exhibits synergistic effects: each modality complements and compensates for other modalities in transferring coherent, unambiguous, and enriched information for higher interaction efficiency and improved sense of presence. In this paper, we explore one such phenomenon: a positive interaction among the geometric field of view, proprioceptive interaction, and tactile feedback. We hypothesize that, with proprioceptive interaction and tactile feedback, the geometric field of view and thus visibility can be increased such that it is larger than the physical field of view, without causing a significant distortion in the user's distance perception. This, in turn, would further help operation of the overall multimodal interaction scheme as the user is more likely to receive the multimodal feedback simultaneously. We tested our hypothesis with an experiment to measure the user's change in distance perception according to different values of egocentric geometric field of view and feedback conditions. Our experimental results have shown that, when coupled with physical interaction, the GFOV could be increased up to 170 percent of the physical field of view without introducing significant distortion in distance perception. Second, when tactile feedback was introduced, in addition to visual and proprioceptive cues, the GFOV could be increased up to 200 percent. The results offer a useful guideline for effectively utilizing modality compensation and building multimodal interfaces for close range spatial tasks in virtual environments. In addition, they demonstrate one way to overcome the shortcomings of the narrow (physical) fields of view of most contemporary HMDs.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124044787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
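The reported gains amount to a simple scaling of the geometric field of view (GFOV) relative to the HMD's physical field of view. A small illustration using the percentages from the abstract follows; the helper name is ours, not the authors'.

```python
def effective_gfov(physical_fov_deg, with_tactile=False):
    """Scale the geometric FOV per the gains reported in the abstract:
    up to 170% of the physical FOV with proprioceptive interaction alone,
    up to 200% when tactile feedback is added."""
    gain = 2.0 if with_tactile else 1.7
    return physical_fov_deg * gain

# Example: a 40-degree HMD could render with a GFOV of up to 68 or 80 degrees.
print(effective_gfov(40.0))                      # 68.0
print(effective_gfov(40.0, with_tactile=True))   # 80.0
```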
{"title":"Adaptive scene synchronization for virtual and mixed reality environments","authors":"Felix G. Hamza-Lup, J. Rolland","doi":"10.1109/VR.2004.9","DOIUrl":"https://doi.org/10.1109/VR.2004.9","url":null,"abstract":"Technological advances in virtual environments facilitate the creation of distributed collaborative environments, in which the distribution of three-dimensional content at remote locations allows efficient and effective communication of ideas. One of the challenges in distributed shared environments is maintaining a consistent view of the shared information in the presence of inevitable network delays and variable bandwidth. A consistent view of a shared 3D scene may significantly increase the sense of presence among participants and improve their interactivity. This paper introduces an adaptive scene synchronization algorithm and a framework for integration of the algorithm in a distributed real-time virtual environment. Results show that, in spite of significant network delays, shared objects can remain synchronized in the views at multiple remotely located sites. Furthermore, residual asynchronicity is quantified as a function of network delay and scalability.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
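The abstract does not give the algorithm's internals. As a hedged illustration of the general idea only, here is a sketch that extrapolates a shared object's state by the measured network delay so a remote view stays close to the sender's current state; the function name and the linear motion model are assumptions.

```python
def compensate_for_delay(position, velocity, measured_delay_s):
    """Extrapolate a shared object's position by the measured one-way
    network delay, so the remote rendering approximates the sender's
    current state rather than its delayed snapshot."""
    return tuple(p + v * measured_delay_s for p, v in zip(position, velocity))

# Example: 120 ms delay, object moving at 0.5 m/s along x.
print(compensate_for_delay((1.0, 0.0, 0.0), (0.5, 0.0, 0.0), 0.120))
```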
{"title":"Improving collision detection in distributed virtual environments by adaptive collision prediction tracking","authors":"Jan Ohlenburg","doi":"10.1109/VR.2004.43","DOIUrl":"https://doi.org/10.1109/VR.2004.43","url":null,"abstract":"Collision detection for dynamic objects in distributed virtual environments is still an open research topic. The problems of network latency and limited network bandwidth prevent exact common solutions. The consistency-throughput tradeoff states that a distributed virtual environment cannot be consistent and highly dynamic at the same time. Remote object visualization is used to extrapolate and predict the movement of remote objects, reducing the bandwidth required for good approximations of the remote objects. Infrequent update messages, however, aggravate the effect of network latency on collision detection. In this paper, a new approach extending remote object visualization techniques is demonstrated to improve the results of collision detection in distributed virtual environments. We show how this approach can significantly reduce the approximation errors caused by remote object visualization techniques. This is done by predicting collisions between remote objects and adaptively changing the parameters of these techniques.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114829978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
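As a rough illustration of the idea, not the paper's actual algorithm, the sketch below predicts when two linearly extrapolated remote objects will come within collision range and tightens the extrapolation error threshold as that time approaches, so update messages become more frequent exactly when collision accuracy matters. All names and the linear motion model are assumptions.

```python
def time_to_proximity(p1, v1, p2, v2, radius_sum, horizon=2.0, step=0.05):
    """Scan ahead along linearly extrapolated paths and return the first
    time (seconds) at which the two objects come within radius_sum of
    each other, or None if that does not happen within the horizon."""
    t = 0.0
    while t <= horizon:
        d = sum((a + va * t - b - vb * t) ** 2
                for a, va, b, vb in zip(p1, v1, p2, v2)) ** 0.5
        if d <= radius_sum:
            return t
        t += step
    return None


def adapted_error_threshold(base_threshold, t_collision):
    """Shrink the extrapolation error threshold (i.e. send updates more
    often) as a predicted collision gets closer."""
    if t_collision is None:
        return base_threshold
    return base_threshold * max(0.1, t_collision / 2.0)


# Example: two objects approaching head-on come within 0.5 m at ~0.75 s.
t = time_to_proximity((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (-1.0, 0.0), 0.5)
print(t, adapted_error_threshold(0.2, t))
```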
{"title":"Creating VR scenes using fully automatic derivation of motion vectors","authors":"Kensuke Habuka, Y. Shinagawa","doi":"10.1109/VR.2004.36","DOIUrl":"https://doi.org/10.1109/VR.2004.36","url":null,"abstract":"We propose a new method to create smooth VR scenes using a limited number of images and the motion vectors among them. We discuss two specific components that simulate a majority of VR scenes: MV VR Object and MV VR Panorama. They provide functions similar to QuickTime VR Object and QuickTime VR Panorama (Chen and Williams, 1993). However, our method can interpolate between the existing images, and therefore smooth movement of viewpoints is achieved. When we look at a primitive from an arbitrary viewpoint, the images of the object associated with the primitive are transformed according to the motion vectors and the location of the viewpoint.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125666040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
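A minimal sketch of view interpolation with motion vectors, assuming a dense per-pixel motion field between two captured images: a naive forward-warping toy example for illustration, not the authors' implementation.

```python
import numpy as np

def interpolate_view(img_a, flow_a_to_b, alpha):
    """Forward-warp img_a by alpha * flow to approximate an intermediate
    viewpoint between two captured images.

    img_a:        (H, W, 3) image array
    flow_a_to_b:  (H, W, 2) per-pixel motion vectors (dx, dy)
    alpha:        0.0 -> img_a's viewpoint, 1.0 -> img_b's viewpoint
    """
    h, w = img_a.shape[:2]
    out = np.zeros_like(img_a)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip((xs + alpha * flow_a_to_b[..., 0]).astype(int), 0, w - 1)
    yt = np.clip((ys + alpha * flow_a_to_b[..., 1]).astype(int), 0, h - 1)
    out[yt, xt] = img_a[ys, xs]   # naive splat; unfilled holes stay black
    return out
```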
{"title":"Resolving object references in multimodal dialogues for immersive virtual environments","authors":"Thies Pfeiffer, Marc Erich Latoschik","doi":"10.1109/VR.2004.67","DOIUrl":"https://doi.org/10.1109/VR.2004.67","url":null,"abstract":"This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in virtual reality (VR). In this system the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue incrementally updates the active set of constraints. As the system is based on findings from human cognition research, it also takes into account, e.g., constraints implicitly assumed by human communicators. The implementation takes VR-related real-time and immersive conditions into account and adapts its architecture to well-known scene-graph based design patterns by introducing a so-called reference resolution engine. In both the conceptual work and the implementation, special care has been taken to allow further refinements and modifications of the underlying resolving processes at a high level.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115053492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
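As a hedged illustration of the constraint-satisfaction view described in the abstract, here is a sketch that incrementally narrows the set of candidate scene objects as each referential unit (a spoken attribute, a pointing gesture, and so on) contributes a constraint. The predicates and the scene representation are assumptions, not the system's actual data structures.

```python
def resolve_reference(scene_objects, constraints):
    """Incrementally intersect the candidate set with each constraint.

    scene_objects: iterable of dicts describing objects in the scene
    constraints:   predicates accumulated from the multimodal dialogue,
                   e.g. lambda o: o["color"] == "red"
    """
    candidates = list(scene_objects)
    for constraint in constraints:
        candidates = [o for o in candidates if constraint(o)]
        if len(candidates) <= 1:
            break   # resolved (or failed) early
    return candidates

# Example: "the red ball" narrows three objects down to one.
scene = [{"id": 1, "color": "red", "shape": "ball"},
         {"id": 2, "color": "red", "shape": "cube"},
         {"id": 3, "color": "blue", "shape": "ball"}]
print(resolve_reference(scene, [lambda o: o["color"] == "red",
                                lambda o: o["shape"] == "ball"]))
```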
{"title":"Co-location and tactile feedback for 2D widget manipulation","authors":"A. Kok, R. V. Liere","doi":"10.1109/VR.2004.14","DOIUrl":"https://doi.org/10.1109/VR.2004.14","url":null,"abstract":"This study investigated the effect of co-location and tactile feedback on 2D widget manipulation tasks in virtual environments. Task completion time and positioning accuracy during each task were measured for subjects under four conditions (co-location vs. no co-location and tactile feedback vs. no tactile feedback). Performance results indicate that co-location and tactile feedback both significantly improve the performance of 2D widget manipulation in 3D virtual environments. Subjective results support these findings.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129810878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testbed evaluation of navigation and text display techniques in an information-rich virtual environment","authors":"Jian Chen, P. Pyla, Doug A. Bowman","doi":"10.1109/VR.2004.73","DOIUrl":"https://doi.org/10.1109/VR.2004.73","url":null,"abstract":"The fundamental question for an information-rich virtual environment is how to access and display abstract information. We investigated two existing navigation techniques: hand-centered object manipulation extending ray-casting (HOMER) and go-go navigation, and two text layout techniques: within-the-world display (WWD) and heads-up display (HUD). Four search tasks were performed to measure participants' performance in a densely packed environment. HUD enabled significantly better performance than WWD and the go-go technique enabled better performance than the HOMER technique for most of the tasks. We found that using HOMER navigation combined with the WWD technique was significantly worse than other combinations for difficult naive search tasks. Users also preferred the combination of go-go and HUD for all tasks.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114212010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}