Measurement of Expression Characteristics in Emotional Situations using Virtual Reality
Kiwan Han, J. Ku, Hyeongrae Lee, Jinsick Park, Sangwoo Cho, Jae-Jin Kim, I. Kim, Sun I. Kim
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811047
Abstract: Expressions are a basic necessity for daily living, as they are required for managing relationships with other people. Conventional expression training has difficulty achieving objective measurement, because assessment depends on the therapist's ability to judge a patient's state or the effectiveness of training. In addition, it is difficult to reproduce the same emotional and social situations for each training or assessment session. Virtual reality techniques can overcome these shortcomings by providing exact, objective measurements and reproducible emotional and social situations. In this study, we developed a virtual reality prototype that can present emotional situations and measure expression characteristics. Although preliminary, this study shows the potential of virtual reality as an assessment tool.

Effects of Latency and Spatial Jitter on 2D and 3D Pointing
Robert J. Teather, Andriy Pavlovych, W. Stuerzlinger
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811029
Abstract: We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and a 3D movement task. First, we characterize the jitter and latency of a 3D tracking device and of an optical mouse used for baseline comparison. We then present an experiment based on ISO 9241-9, which measures the performance of pointing devices. We added latency and jitter to the mouse and compared it to the 3D tracker. Results indicate that latency has a stronger effect on performance than small amounts of spatial jitter. A second experiment found that erratic jitter "spikes" can affect 3D movement performance.
{"title":"Virtual Heliodon: Spatially Augmented Reality for Architectural Daylighting Design","authors":"Yu Sheng, Theodore C. Yapo, C. Young, B. Cutler","doi":"10.1109/VR.2009.4811000","DOIUrl":"https://doi.org/10.1109/VR.2009.4811000","url":null,"abstract":"We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the inter-reflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115884364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective Presentation Technique of Scent Using Small Ejection Quantities of Odor","authors":"Junta Sato, Kaori Ohtsu, Yuichi Bannai, Ken-ichi Okada","doi":"10.1109/VR.2009.4811015","DOIUrl":"https://doi.org/10.1109/VR.2009.4811015","url":null,"abstract":"Trials on the transmission of olfactory information together with audio/visual information are currently underway. However, a problem exists in that continuous emission of scent leaves scent in the air causing human olfactory adaptation. To resolve this problem, we aimed at minimizing the quantity of scent ejected using an ink-jet olfactory display developed. Following the development of a breath sensor for breath synchronization, we next developed an olfactory ejection system to present scent on each inspiration. We then measured human olfactory characteristics in order to determine the most suitable method for presenting scent on an inspiration. Experiments revealed that the intensity of scent perceived by the user was altered by differences in the presentation method even when the quantity of scent was unchanged. We present here a method of odor presentation that most effectively minimizes the ejection quantities.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124960784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Image-Warping Architecture for VR: Low Latency versus Image Quality","authors":"F. Smit, R. V. Liere, S. Beck, B. Fröhlich","doi":"10.1109/VR.2009.4810995","DOIUrl":"https://doi.org/10.1109/VR.2009.4810995","url":null,"abstract":"Designing low end-to-end latency system architectures for virtual reality is still an open and challenging problem. We describe the design, implementation and evaluation of a client-server depth-image warping architecture that updates and displays the scene graph at the refresh rate of the display. Our approach works for scenes consisting of dynamic and interactive objects. The end-to-end latency is minimized as well as smooth object motion generated. However, this comes at the expense of image quality inherent to warping techniques. We evaluate the architecture and its design trade-offs by comparing latency and image quality to a conventional rendering system. Our experience with the system confirms that the approach facilitates common interaction tasks such as navigation and object manipulation.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115299798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

cMotion: A New Game Design to Teach Emotion Recognition and Programming Logic to Children using Virtual Humans
Samantha L. Finkelstein, A. Nickel, Lane Harrison, Evan A. Suma, T. Barnes
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811039
Abstract: This paper presents the design of the final stage of a new game currently in development, entitled cMotion, which will use virtual humans to teach emotion recognition and programming concepts to children. Having multiple facets, cMotion is designed to teach the intended users how to recognize facial expressions and manipulate an interactive virtual character using a visual drag-and-drop programming interface. By creating a game which contextualizes emotions, we hope to foster learning of both emotions in a cultural context and computer programming concepts in children. The game will be completed in three stages which will each be tested separately: a playable introduction which focuses on social skills and emotion recognition, an interactive interface which focuses on computer programming, and a full game which combines the first two stages into one activity.

Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments
W. Steptoe, Oyewole Oyekoya, A. Murgia, R. Wolff, John P. Rae, Estefania Guimaraes, D. Roberts, A. Steed
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811003
Abstract: In face-to-face collaboration, eye gaze is used both as a bidirectional signal to monitor and indicate focus of attention and action, as well as a resource to manage the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars representing and controlled by each participant share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVE™-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants, to a gaze model informed by head orientation for saccade generation, and to static gaze featuring non-moving eyes. We analyse task performance, subjective user experience, and interactional behaviour. While not providing a statistically significant benefit over static gaze, tracked gaze was observed to be the highest-performing condition. The gaze model, however, resulted in significantly lower task performance and an increased error rate.
{"title":"Issues with Virtual Space Perception within Reaching Distance: Mitigating Adverse Effects on Applications Using HMDs in the Automotive Industry","authors":"Mathias Moehring, Antje Gloystein, R. Dörner","doi":"10.1109/VR.2009.4811027","DOIUrl":"https://doi.org/10.1109/VR.2009.4811027","url":null,"abstract":"Besides visual validations of virtual car models, immersive applications like a Virtual Seating Buck enable car designers and engineers to decide product related issues without building expensive hardware prototypes. For replacing real models, it is mandatory that decision makers can rely on VR-based findings. However, especially when using a Head Mounted Display, users complain about an unnatural perception of space. Such misperceptions have already been reported in literature where several evaluation methods have been proposed for researching possible causes. Unfortunately, most of the methods do not represent the scenarios usually found in the automotive industry, since they focus on too large distances of five to fifteen meters. In this paper, we present an evaluation scenario adapted to size and distance perception within the reach of the user. With this method, we analyzed our standard setups and found a systematic error that is lower than aberrations reported by earlier research work. Furthermore, we tried to mitigate perception errors by a Depth of Field Blur applied to the virtual images.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124234143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Effect of Proprioception Training of patient with Hemiplegia by Manipulating Visual Feedback using Virtual Reality: The Preliminary results
Sangwoo Cho, J. Ku, Kiwan Han, Hyeongrae Lee, Jinsick Park, Y. Kang, I. Kim, Sun I. Kim
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811056
Abstract: In this study, we examined the effect of proprioception training for patients with hemiplegia by manipulating visual feedback. Six patients with hemiplegia participated in the experiment. Patients trained on a reaching task, both with and without visual feedback, for two weeks. They were evaluated in pre-, mid-, and post-tests on the task, with and without visual feedback. The first-click error distance on the reaching task decreased after training on the version with visual feedback removed. In addition, the velocity profile of the reaching movement formed an inverse-U shape after training. In conclusion, visual feedback manipulation using virtual reality could provide a tool for training reaching movement by forcing patients to rely on proprioception, which enhances reaching skills in patients with hemiplegia.

Interactive Odor Playback Based on Fluid Dynamics Simulation
H. Matsukura, Hitoshi Yoshida, H. Ishida, T. Nakamoto
2009 IEEE Virtual Reality Conference, 14 March 2009. DOI: https://doi.org/10.1109/VR.2009.4811042
Abstract: This article describes experiments on an interactive application of an olfactory display system that incorporates computational fluid dynamics (CFD) simulation. In the proposed system, the olfactory display adds special effects to movies and virtual reality systems by releasing odors relevant to the scenes shown on the computer screen. To provide high-presence olfactory stimuli, a model of the environment shown in the scene is provided to a CFD solver, which calculates the airflow field in the environment and the dispersal of odor molecules from their source. An odor blender then generates the odor at a concentration determined from the calculated odor distribution. In the experiments, a virtual room was presented on a PC monitor, and participants were asked to stroll through the room to find an odor source. The results showed the effectiveness of the CFD simulation in reproducing the spatial distribution of the odor in the virtual space.