{"title":"CrowdAR Table - An AR Table for Interactive Crowd Simulation","authors":"Wolfgang Hürst, Roland Geraerts, Yiran Zhao","doi":"10.1109/AIVR46125.2019.00070","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00070","url":null,"abstract":"In this paper we describe a prototype implementation of an augmented reality (AR) system for accessing and interacting with crowd simulation software. We identify a target audience and tasks (access to the software in a science museum) motivate the choice of AR system (an interactive table complemented with handheld AR via smartphones) and describe its implementation. Our system has been realized in a prototypical implementation verifying its feasibility and potential. Detailed user testing will be part of our future work.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2014 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114557135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Case Study on Visual-Inertial Odometry using Supervised, Semi-Supervised and Unsupervised Learning Methods","authors":"Yuan Tian, M. Compere","doi":"10.1109/AIVR46125.2019.00043","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00043","url":null,"abstract":"This paper presents a pilot study comparing three different learning-based visual-inertial odometry (VIO) approaches: supervised, semi-supervised, and unsupervised. Localization and navigation have been the ancient bur important topic in both research area and industry. Many well-developed algorithms have been established regarding this research task using a single sensor or multiple sensors. VIO, that uses images and inertial measurements to estimate the motion, is considered as one of the key technologies to virtual reality and argument reality. With the rapid development of artificial intelligence technology, people have started to explore new methods for VIO instead of traditional feature-based methods. The advantages of using learning-based method can be found in eliminating the calibration and enhance the robustness and accuracy. However, most of the popular learning-based VIO systems require ground truth during training. The lack of training dataset limits the power of neural networks. In this study, we proposed both semi-supervised and unsupervised methods and compared the performances between the supervised model and them. The neural networks were trained and tested on two well-known datasets: KITTI Dataset and EuRoC MAV Dataset.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128827170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Verbal Behavior Generation for Virtual Characters in Group Conversations","authors":"Ferdinand de Coninck, Zerrin Yumak, G. Sandino, R. Veltkamp","doi":"10.1109/AIVR46125.2019.00016","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00016","url":null,"abstract":"We present an approach to synthesize non-verbal behaviors for virtual characters during group conversations. We employ a probabilistic model and use Dynamic Bayesian Networks to find the correlations between the conversational state and non-verbal behaviors. The parameters of the network are learned by annotating and analyzing the CMU Panoptic dataset. The results are evaluated in comparison to the ground truth data and with user experiments. The behaviors can be generated online and have been integrated with the animation engine of a game company specialized in Virtual Reality applications for Cognitive Behavioral Therapy. To our knowledge, this is the first study that takes into account a data-driven approach to automatically generate non-verbal behaviors during group interactions.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129688846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live Emoji: A Live Storytelling VR System with Programmable Cartoon-Style Emotion Embodiment","authors":"Zhenjie Zhao, Feng Han, Xiaojuan Ma","doi":"10.1109/AIVR46125.2019.00057","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00057","url":null,"abstract":"We introduce a novel cartoon-style hybrid emotion embodiment model for live storytelling in virtual reality (VR). It contains an avatar with six basic emotions and an auxiliary multimodal display to enhance the expression of emotions. We further design and implement a system to teleoperate the embodiment model in VR for live storytelling. Specifically, 1) we design a novel visual programming tool that allows users to customize emotional effects based on the emotion embodiment model; 2) we design a novel face tracking module to map presenters' emotional states to the avatar in VR. Our web-based implementation makes the application easy to use. This is an accompanying paper extracted from [1] for the demo session in IEEE AIVR 2019.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123086771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Affective Computing in Virtual Reality Environments for Managing Surgical Pain and Anxiety","authors":"Vishnunarayan Girishan Prabhu, Courtney Linder, L. Stanley, Robert Morgan","doi":"10.1109/AIVR46125.2019.00049","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00049","url":null,"abstract":"Pain and anxiety are common accompaniments of surgery. About 90% of people indicate elevated levels of anxiety during pre-operative care, and 66% of the people report moderate to high levels of pain immediately after surgery. Currently, opioids are the primary method for pain management during postoperative care, and approximately one in 16 surgical patients prescribed opioids becomes a long-term user. This, along with the current opioid epidemic crisis calls for alternative pain management mechanisms. This research focuses on utilizing affective computing techniques to develop and deliver an adaptive virtual reality experience based on the user's physiological response to reduce pain and anxiety. Biofeedback is integrated with a virtual environment utilizing the user's heart rate variability, respiration, and electrodermal activity. Early results from Total Knee Arthroplasty patients undergoing surgery at Patewood Memorial Hospital in Greenville, SC demonstrate promising results in the management of pain and anxiety during pre and post-operative care.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123100857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Remote Environment Exploration with Drone Agent and Haptic Force Feedback","authors":"Tinglin Duan, Parinya Punpongsanon, Shengxin Jia, D. Iwai, Kosuke Sato, K. Plataniotis","doi":"10.1109/AIVR46125.2019.00034","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00034","url":null,"abstract":"Camera drones allow exploring remote scenes that are inaccessible or inappropriate to visit in person. However, these exploration experiences are often limited due to insufficient scene information provided by front cameras, where only 2D images or videos are supplied. Combining a camera drone vision with haptic feedback would augment users' spatial understandings of the remote environment. But such designs are usually difficult for users to learn and apply, due to the complexity of the system and unfluent UAV control. In this paper, we present a new telepresence system for remote environment exploration, with a drone agent controlled by a VR mid-air panel. The drone is capable of generating real-time location and landmark details using integrated Simultaneous Location and Mapping (SLAM). The SLAMs' point cloud generations are produced using RGB input, and the results are passed to a Generative Adversarial Network (GAN) to reconstruct 3D remote scenes in real-time. The reconstructed objects are taken advantage of by haptic devices which could improve user experience through haptic rendering. Capable of providing both visual and haptic feedback, our system allows users to examine and exploit remote areas without having to be physically present. An experiment has been conducted to verify the usability of 3D reconstruction result in haptic feedback rendering.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115799403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structuring and Inspecting 3D Anchors for Seismic Volume into Hyperknowledge Base in Virtual Reality","authors":"W. Santos, Isabela Chambers, E. V. Brazil, M. Moreno","doi":"10.1109/AIVR46125.2019.00063","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00063","url":null,"abstract":"Seismic data is a source of information geoscientists use to investigate underground regions to look for resources to explore. Such data are volumetric and noisy, and thus challenging to visualize, which motivated the research of new computational systems to assist the expert, such as visualization methods, signal processing, and machine learning models. We propose a system that aids geologists, geophysicists, and related experts in the domain in interpreting seismic data in virtual reality (VR). The system uses a hyperknowledge base (HKBase), which structures regions of interest (ROIs) as anchors with semantics from the user to the system and vice-versa. For instance, through the HKBase, the user can load and inspect the output from AI systems or give new inputs and feedback in the same way. We ran tests with experts to evaluate the system in their tasks to collect feedback and new insights on how the software could transform their routines. In accordance with our results, we claim we took one step forward for VR in the oil & gas industry by creating a valuable experience in the task of seismic interpretation.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115858030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting Specific Voice from Mixed Audio Source","authors":"Kunihiko Sato","doi":"10.1109/AIVR46125.2019.00039","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00039","url":null,"abstract":"We propose auditory diminished reality by a deep neural network (DNN) extracting a single speech signal from a mixture of sounds containing other speakers and background noise. To realize the proposed DNN, we introduce a new dataset comprised of multi-speakers and environment noises. We conduct evaluations for measuring the source separation quality of the DNN. Additionally, we compare the separation quality of models learned with different amounts of training data. As a result, we found there is no significant difference in the separation quality between 10 and 30 minutes of the target speaker's speech length for training data.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"133 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130891198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowd and Procession Hypothesis Testing for Large-Scale Archaeological Sites","authors":"Kristin Chow, Aline Normoyle, Jeanette Nicewinter, Clark L. Erickson, N. Badler","doi":"10.1109/AIVR46125.2019.00069","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00069","url":null,"abstract":"Our goal is to construct parameterized, spatially and temporally situated simulations of large-scale public ceremonies. Especially in pre-historic contexts, these activities lack precise records and must be hypothesized from material remains, documentary sources, and cultural context. Given the number of possible variables, we are building a computational system SPACES (Spatialized Performance And Ceremonial Event Simulations), that rapidly creates variations that may be both visually (qualitatively) and quantitatively assessed. Of particular interest are processional movements of crowds through a large-scale, navigationally complex, and semantically meaningful site, while exhibiting individual contextual emotional, performative, and ceremonially realistic behaviors.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126503361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vibration Feedback Controlled by Intuitive Pressing in Virtual Reality","authors":"Yu-Wei Wang, Tse-Yu Pan, Yung-Ju Chang, Min-Chun Hu","doi":"10.1109/AIVR46125.2019.00041","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00041","url":null,"abstract":"To provide more immersive experience in VR, high-fidelity vibrotactile feedback is one of the most important task to make VR user can feel virtual objects. In this work, we propose a mobile-based vibrotactile feedback system called FingerVIP, which provides an intuitive and efficient way for VR application/game designers to input proper vibration configuration of each target vibrotactile feedback. Our system uses pressure sensors attached on fingers as the controllers to manipulate the vibration configuration, including amplitude, frequency, and time duration. We utilized the proposed FingerVIP to set three kinds of vibrotactile feedback in a VR sports game and validated that FingerVIP successfully helped game designers reduce the number of iteration and the time for configuring vibration.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126527073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}