Binarization Using Morphological Decomposition Followed by cGAN
Cheng-Pan Hsieh, Shih-Kai Lee, Ya-Yi Liao, R. Huang, Jung-Hua Wang
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019. DOI: 10.1109/AIVR46125.2019.00044
Abstract: This paper presents a novel binarization scheme for stained decipherable patterns. First, the input image is downsized, which not only saves computation time but also preserves the key features necessary for successful decoding. Then, high- and low-contrast areas are separated by applying morphological operators to the downsized grayscale image and subtracting the two resulting output images from each other. If necessary, these areas are decomposed further to obtain a finer separation of regions. After this preprocessing, binarization can be done either by using a GMM to estimate a binarization threshold for each region, or by treating the binarization problem as an image-translation task and training a conditional generative adversarial network (cGAN) with the high- and low-contrast areas as conditional inputs.

Civil War Battlefield Experience: Historical Event Simulation using Augmented Reality Technology
V. Nguyen, Kwanghee Jung, Seung-Chul Yoo, Seungman Kim, Sohyun Park, M. Currie
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019. DOI: 10.1109/AIVR46125.2019.00068
Abstract: In recent years, with the development of modern technology, Virtual Reality (VR) has proven to be an effective means of entertainment and of encouraging learning. Users immerse themselves in a 3D environment to experience situations that are very difficult or impossible to encounter in real life, such as volcanoes, ancient buildings, or events on a battlefield. Augmented Reality (AR), on the other hand, takes a different approach by allowing users to remain in their physical world while virtual objects are overlaid on physical ones. In education and tourism, VR and AR are becoming platforms for student learning and tourist attractions. Although several studies have been conducted to promote cultural preservation, they have mostly focused on VR for historical building visualization. The use of AR for simulating an event is relatively uncommon, especially for battlefield simulation. This paper presents a work in progress: a web-based AR application that enables both students and tourists to witness a series of events from the Battle of Palmito Ranch, located near Brownsville, Texas. With markers embedded directly into the printed map, users can experience the last battle of the Civil War in the US.

{"title":"Combining Pairwise Feature Matches from Device Trajectories for Biometric Authentication in Virtual Reality Environments","authors":"A. Ajit, N. Banerjee, Sean Banerjee","doi":"10.1109/AIVR46125.2019.00012","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00012","url":null,"abstract":"In this paper we provide an approach to perform seamless continual biometric authentication of users in virtual reality (VR) environments by combining position and orientation features from the headset, right hand controller, and left hand controller of a VR system. The rapid growth of VR in mission critical applications in military training, flight simulation, therapy, manufacturing, and education necessitates authentication of users based on their actions within the VR space as opposed to traditional PIN and password based approaches. To mimic goal-oriented interactions as they may occur in VR environments, we capture a VR dataset of trajectories from 33 users throwing a ball at a virtual target with 10 samples per user captured on a training day, and 10 samples on a test day. Due to the sparseness in the number of training samples per user, typical of realistic interactions, we perform authentication by using pairwise relationships between trajectories. Our approach uses a perceptron classifier to learn weights on the matches between position and orientation features on two trajectories from the headset and the hand controllers, such that a low classifier score is obtained for trajectories belonging to the same user, and a high score is obtained otherwise. We also perform extensive evaluation on the choice of position and orientation features, combination of devices, and choice of match metrics and trajectory alignment method on the accuracy, and demonstrate a maximum accuracy of 93.03% for matching 10 test actions per user by using orientation from the right hand controller and headset.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123848982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ubiquitous Virtual Humans: A Multi-platform Framework for Embodied AI Agents in XR
Arno Hartholt, Edward Fast, Adam Reilly, W. Whitcup, Matt Liewer, S. Mozgai
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019. DOI: 10.1109/AIVR46125.2019.00072
Abstract: We present an architecture and framework for the development of virtual humans for a range of computing platforms, including mobile, web, Virtual Reality (VR), and Augmented Reality (AR). The framework uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback based on mobile sensors in headset AR.

Room Style Estimation for Style-Aware Recommendation
E. Cansizoglu, Hantian Liu, Tomer Weiss, Archi Mitra, Dhaval Dholakia, Jae-Woo Choi, D. Wulin
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019. DOI: 10.1109/AIVR46125.2019.00062
Abstract: Interior design is a complex task, as evidenced by the multitude of professionals, websites, and books offering design advice. Such advice is also highly subjective, since different experts may hold different interior design opinions. Our goal is to offer data-driven recommendations for an interior design task that reflect an individual's room style preferences. We present a style-based image suggestion framework that searches for room ideas and relevant products for a given query image. We train a deep neural network classifier with a VGG architecture, focusing on high-volume classes with high-agreement samples. The resulting model shows promising results and paves the way to style-aware product recommendation in virtual reality platforms for 3D room design.

{"title":"Open-Source Physiological Computing Framework using Heart Rate Variability in Mobile Virtual Reality Applications","authors":"Luis Quintero, P. Papapetrou, J. Muñoz","doi":"10.1109/AIVR46125.2019.00027","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00027","url":null,"abstract":"Electronic and mobile health technologies are posed as a tool that can promote self-care and extend coverage to bridge the gap in accessibility to mental care services between low-and high-income communities. However, the current technology-based mental health interventions use systems that are either cumbersome, expensive or require specialized knowledge to be operated. This paper describes the open-source framework PARE-VR, which provides heart rate variability (HRV) analysis to mobile virtual reality (VR) applications. It further outlines the advantages of the presented architecture as an initial step to provide more scalable mental health therapies in comparison to current technical setups; and as an approach with the capability to merge physiological data and artificial intelligence agents to provide computing systems with user understanding and adaptive functionalities. Furthermore, PARE-VR is evaluated with a feasibility study using a specific relaxation exercise with slow-paced breathing. The aim of the study is to get insights of the system performance, its capability to detect HRV metrics in real-time, as well as to identify changes between normal and slow-paced breathing using the HRV data. Preliminary results of the study, with the participation of eleven volunteers, showed high engagement of users towards the VR activity, and demonstrated technical potentialities of the framework to create physiological computing systems using mobile VR and wearable smartwatches for scalable health interventions. Several insights and recommendations were concluded from the study for enhancing the HRV analysis in real-time and conducting future similar studies.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131972243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Situation-Adaptive Object Grasping Recognition in VR Environment","authors":"Koki Hirota, T. Komuro","doi":"10.1109/AIVR46125.2019.00035","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00035","url":null,"abstract":"In this paper, we propose a method for recognizing grasping of virtual objects in VR environment. The proposed method utilizes the fact that the position and shape of the virtual object to be grasped are known. A camera acquires an image of the user grasping a virtual object, and the posture of the hand is extracted from that image. The obtained hand posture is used to classify whether it is a grasping action or not. In order to evaluate the proposed method, we created a new dataset that was specialized for grasping virtual objects with a bare hand. There were three shapes and three positions of virtual objects in the dataset. The recognition rate of the classifier that was trained using the dataset with specific shapes of virtual objects was 93.18 %, and that with all the shapes of virtual objects was 87.71 %. This result shows that the recognition rate was improved by training the classifier using the shape-dependent dataset.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114353064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Design Process for Enhancing Visual Expressive Qualities of Characters from Performance Capture into Virtual Reality","authors":"Victoria Campbell","doi":"10.1109/AIVR46125.2019.00067","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00067","url":null,"abstract":"In designing performances for virtual reality one must consider the unique qualities of the VR medium in order to deliver expressive character performance. This means that the design requirements for participant engagement and immersion must evolve to address these new possibilities. To address the need for evolving an expressive character performance for VR, a five step production framework is proposed which addresses steps of directing, performance capture, the cyclical stages of retargeting and animation refinement and movement translation to avatars in VR.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127448696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extended Abstract: Augmented Reality for Human-Robot Cooperation in Aircraft Assembly
A. Luxenburger, Jonas Mohr, Torsten Spieldenner, Dieter Merkel, Fabio Andres Espinosa Valcarcel, Tim Schwartz, Florian Reinicke, Julian Ahlers, Markus Stoyke
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019. DOI: 10.1109/AIVR46125.2019.00052
Abstract: This extended abstract and the accompanying demonstration video show how Augmented Reality (AR) can be used in an industrial setting to coordinate a hybrid team consisting of a human worker and two robots in order to rivet stringers and ties to an aircraft hull.

{"title":"Deep Learning on VR-Induced Attention","authors":"Gang Li, Muhammad Adeel Khan","doi":"10.1109/AIVR46125.2019.00033","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00033","url":null,"abstract":"Some evidence suggests that virtual reality (VR) approaches may lead to a greater attentional focus than experiencing the same scenarios presented on computer monitors. The aim of this study is to differentiate attention levels captured during a perceptual discrimination task presented on two different viewing platforms, standard personal computer (PC) monitor and head-mounted-display (HMD)-VR, using a well-described electroencephalography (EEG)-based measure (parietal P3b latency) and deep learning-based measure (that is EEG features extracted by a compact convolutional neural network-EEGNet and visualized by a gradient-based relevance attribution method-DeepLIFT). Twenty healthy young adults participated in this perceptual discrimination task in which according to a spatial cue they were required to discriminate either a \"Target\" or \"Distractor\" stimuli on the screen of viewing platforms. Experimental results show that the EEGNet-based classification accuracies are highly correlated with the p values of statistical analysis of P3b. Also, the visualized EEG features are neurophysiologically interpretable. This study provides the first visualized deep learning-based EEG features captured during an HMD-VR-based attentional task.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132495093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}