{"title":"DatAR: An Immersive Literature Exploration Environment for Neuroscientists","authors":"Ivar Troost, Ghazaleh Tanhaei, L. Hardman, Wolfgang Hürst","doi":"10.1109/AIVR50618.2020.00020","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00020","url":null,"abstract":"Maintaining an overview of publications in the neuroscientific field is challenging, especially with an eye to finding relations at scale; for example, between brain regions and diseases. This is true for well-studied as well as nascent relationships. To support neuroscientists in this challenge, we developed an Immersive Analytics (IA) prototype for the analysis of relationships in large collections of scientific papers. In our video demonstration we showcase the system’s design and capabilities using a walkthrough and mock user scenario. This companion paper relates our prototype to previous IA work and offers implementation details.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128999916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extended Abstract: CoShopper - Leveraging Artificial Intelligence for an Enhanced Augmented Reality Grocery Shopping Experience","authors":"Yasmeen Alhamdan, Saif Alabachi, N. Khan","doi":"10.1109/AIVR50618.2020.00069","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00069","url":null,"abstract":"This paper presents a system that integrates Artificial Intelligence (AI) methods with Augmented Reality (AR) techniques to enhance grocery shopping experience through the use of smart glasses. Our proposed framework deploys a Convolutional Neural Network (CNN) object detection model that allows for item identification. By simultaneously retrieving data from a large nutrition database, personal medical reports, and other grocery store related datasets, our intelligent system is able to provide user-centric nutrition facts, health and wellness tips, and unhealthy selection warnings that are augmented on a real time broadcasting of the smart glasses. Our state-of-the-art framework CoShopper demonstrates high accuracy in detecting grocery items, improves product selection, increases cost efficiency, and reduces the time spent in the process. Video demo of CoShopper can be viewed at [shorturl.at/mqIPX]","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114998296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MIXR: A Standard Architecture for Medical Image Analysis in Augmented and Mixed Reality","authors":"Benjamin Allison, Xujiong Ye, Faraz Janan","doi":"10.1109/AIVR50618.2020.00053","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00053","url":null,"abstract":"Medical image analysis is evolving into a new dimension: where it will combine the power of AI and machine learning with real-time, real-space displays, namely Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) - known collectively as Extended Reality (XR). These devices, typically available as head-mounted displays, are enabling the move towards the complete transformation of how medical data is viewed, processed and analysed in clinical practice. There have been recent attempts on how XR gadgets can help in surgical planning and training of medics. However, the radiological front from a detection, diagnostics and prognosis remains unexplored. In this paper we propose a standard framework or architecture called Medical Imaging in Extended Reality (MIXR) for building medical image analysis applications in XR. MIXR consists of several components used in literature; however, tied together for reconstructing volume data in 3D space. 
Our focus here is on the reconstruction mechanism for CT and MRI data in XR; nevertheless, the framework we propose has applications beyond these modalities.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116460022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive and Scalable Layout Synthesis with Design Templates","authors":"Hameedullah Farooki, E. Cansizoglu, Jae-Woo Choi, Tomer Weiss","doi":"10.1109/AIVR50618.2020.00049","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00049","url":null,"abstract":"The design of virtual and real spaces is a complex task, as is evidenced by the large number of professionals offering their services. Researchers proposed multiple computational methods that aim to alleviate such complexity. Unfortunately, most methods for layout synthesis are not directly applicable to non-professional consumers, because of usability challenges in terms of computation, user input, and scalability. Hence, we propose a novel layout synthesis system based on design templates. Design templates define geometrical rules for creating rooms, according to the room type and furniture function. With such templates, our system allows a customizable user experience, and is computationally fast while remaining scalable. We demonstrate our method with several example layouts, focusing on both small and large rooms.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114598824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention Estimation in Virtual Reality with EEG based Image Regression","authors":"V. Delvigne, H. Wannous, Jean-Philippe Vandeborre, L. Ris, T. Dutoit","doi":"10.1109/AIVR50618.2020.00012","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00012","url":null,"abstract":"Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder affecting a certain amount of children and their way of living. A novel method to treat this disorder is to use Brain-Computer Interfaces (BCI) throughout the patient learns to self-regulate his symptoms by herself. In this context, researches have led to tools aiming to estimate the attention toward these interfaces. In parallel, the democratization of virtual reality (VR) headset, and the fact that it produces valid environments for several aspects: safe, flexible and ecologically valid have led to an increase of its use for BCI application. Another point is that Artificial Intelligence (AI) is more and more developed in different domain among which medical application. In this paper, we present an innovative method aiming to estimate attention from the measurement of physiological signals: Electroencephalogram (EEG), gaze direction and head movement. This framework is developed to assess attention in VR environments. We propose a novel approach for feature extraction and a dedicated Machine Learning model. 
The pilot study has been applied on a set of volunteer and our approach presents a lower error rate in comparison with the state of the art methods.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130558763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inverse Kinematics and Temporal Convolutional Networks for Sequential Pose Analysis in VR","authors":"David C. Jeong, Jackie Jingyi Xu, L. Miller","doi":"10.1109/AIVR50618.2020.00056","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00056","url":null,"abstract":"Drawing from a recent call to advance generalizability and causal inference in psychological science using contextually representative research designs [1], we introduce a conceptual framework that integrates techniques in machine perception of poses with VR-driven inverse kinematic character animation, leveraging the Unity game engine to mediate between the human user and the machine learner. This Computational Virtual Reality (C-VR) system contains the following components: a) Human motion capture (VR), b) Human to avatar character animation (inverse kinematics), c) character animation recordings (virtual cameras), d) avatar pose detection (OpenPose), d) avatar pose classification (SVM), and e) sequential avatar moving pose analyses (TCN). By leveraging the precision in representation afforded in virtual environments and agents and the precision in perception afforded in computer vision and machine learning in a unified system, we may take steps towards understanding a wider range of human complexity.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125425362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Problems with Physical Simulation in a Virtual Lego-based Assembly Task using Unity3D Engine","authors":"Nor Farzana Syaza Jeffri, D. R. A. Rambli","doi":"10.1109/AIVR50618.2020.00060","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00060","url":null,"abstract":"Augmented Reality (AR) has potential in the manufacturing and manual assembly industry. However, the effectiveness of AR as an assistive system depends on several factors, one of which is information presentation. A set of guidelines for the effective interface design of Augmented Reality systems for manual assembly have been proposed. To validate the design guidelines, it was decided to use a simulated AR approach using Virtual Reality (VR). To evaluate the simulated AR system, it is necessary to also simulate the manual assembly task. This paper describes the challenges faced when developing the manual assembly task simulation, particularly when simulating physical interactions in VR using the Unity3D engine.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125987521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A QoE and Visual Attention Evaluation on the Influence of Spatial Audio in 360 Videos","authors":"Amit Hirway, Yuansong Qiao, Niall Murray","doi":"10.1109/AIVR50618.2020.00071","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00071","url":null,"abstract":"Recently, there has been growing interest from academia and industry on the application of immersive technologies across a range of domains. Once such technology, 360° video, can be captured using an omnidirectional multi-camera arrangement. These 360° videos can then be rendered via Virtual Reality (VR) Head Mounted Displays (HMD). Viewers then have the freedom to look around the scene in any direction they wish. Whereas a body of work exists that focused on modeling visual attention (VA) in VR, little research has considered the impact of the audio modality on VA in VR. It is well accepted that audio has an important role in VR experiences. High quality spatial audio offers listeners the opportunity to experience sound in all directions. One such technique, Ambisonics or 3D audio, offers a complete 360° soundscape. This paper reports the results of an empirical study that looked at understanding how (if at all) spatial audio influences visual attention in 360° videos. It also assessed the impact of spatial audio on the user’s Quality of Experience (QoE) by capturing implicit, explicit, and objective metrics. The results suggest surprisingly similar explicit QoE ratings for both the spatial and non-spatial audio environments. The implicit metrics indicate that users integrated with the spatial environment more quickly than the non-spatial environment. 
Users who experienced the spatial audio environment had a higher maximum mean head pose pitch value and were found to be more focused towards the sound-emitting regions in the spatial audio environment experiences.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127970191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Immersive Node-Link Visualization of Artificial Neural Networks for Machine Learning Experts","authors":"M. Bellgardt, C. Scheiderer, T. Kuhlen","doi":"10.1109/AIVR50618.2020.00015","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00015","url":null,"abstract":"The black box problem of artificial neural networks (ANNs) is still a very relevant issue. When communicating basic concepts of ANNs, they are often depicted as node-link diagrams. Despite this being a straight forward way to visualize them, it is rarely used outside an educational context. However, we hypothesize that large-scale node-link diagrams of full ANNs could be useful even to machine learning experts. Hence, we present a visualization tool that depicts convolutional ANNs as node-link diagrams using immersive virtual reality. We applied our tool to a use-case in the field of machine learning research and adapted it to the specific challenges. Finally, we performed an expert review to evaluate the usefulness of our visualization. We found that our node-link visualization of ANNs was perceived as helpful in this professional context.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129855910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Computer Aided Diagnosis of Autism Spectrum Disorder Using Virtual Environments","authors":"Daniel Roth, M. Jording, Tobias Schmee, Peter Kullmann, N. Navab, K. Vogeley","doi":"10.1109/AIVR50618.2020.00029","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00029","url":null,"abstract":"Autism Spectrum Disorders (ASD) are neurodevelopmental disorders that are associated with characteristic difficulties to express and interpret nonverbal behavior, such as social gaze behavior. The state of the art in diagnosis is the clinical interview that is time intensive for the clinicians and does not take into account any objective measures of behavior. We herewith propose an empirical approach that can potentially support diagnosis based on the assessment of nonverbal behavior in avatar-mediated interactions in virtual environments. In a first study, ASD individuals and a typically developed control group were interacting in dyads. Head motion, and eye gaze of both interlocutors were recorded, replicated to the avatars and displayed to the partner through a distributed virtual environment. The nonverbal behavior of both interaction partners was recorded, and resulting preprocessed data was classified with up to 92.9parcent classification accuracy, with the amount of eye area focus and the average horizontal gaze change being the most relevant features. 
We expect that such systems could improve the diagnostic assessment on the basis of objective measures of nonverbal behavior.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126406853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}