Self-Organizing Maps for Intuitive Gesture-Based Geometric Modelling in Augmented Reality
B. Felbrich, Gwyllim Jahn, Cameron Newnham, A. Menges
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00016
Abstract: Modelling three-dimensional virtual objects for architectural, product and game design requires elaborate skill with the respective CAD software and is often tedious. We explore the potential of Kohonen networks, also called self-organizing maps (SOMs), as a concept for intuitive 3D modelling aided by mixed reality. We effectively provide a computational "clay" that can be pulled, pushed and shaped by picking and placing control objects with an augmented reality headset. Our approach benefits from combining state-of-the-art CAD software with GPU computation and mixed reality hardware, as well as from custom SOM network topologies and arbitrary data dimensionality. The approach is demonstrated in three case studies.
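The abstract above builds on the standard Kohonen/SOM weight-update rule. As a minimal sketch of that rule only (the paper's actual topologies, learning schedules and GPU implementation are not specified here; the function name and parameter values below are illustrative assumptions):

```python
import numpy as np

def som_step(weights, grid, x, lr=0.5, sigma=1.0):
    """One self-organizing-map update: pull nodes toward sample x.

    weights: (N, 3) node positions in 3D (the deformable "clay")
    grid:    (N, 2) node coordinates in the network topology
    x:       (3,)   training sample, e.g. a placed control object
    """
    # Best-matching unit: the node whose weight vector is closest to x.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood over topological (grid) distance to the BMU.
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    # Move every node toward x, weighted by its neighborhood activation.
    return weights + lr * h[:, None] * (x - weights)
```

Iterating this step over samples drawn from user-placed control objects, while decaying lr and sigma, makes the node lattice smoothly approximate the gesture input, which is the "computational clay" behavior the abstract describes.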
DeepLinQ: Distributed Multi-Layer Ledgers for Privacy-Preserving Data Sharing
Edward Y. Chang, Shih-Wei Liao, Chun-Ting Liu, Wei-Chen Lin, Pin-Wei Liao, Wei-Kang Fu, Chung-Huan Mei, Emily J. Chang
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00037
Abstract: This paper presents the requirements for DeepLinQ and its architecture. DeepLinQ is a multi-layer blockchain architecture that improves flexibility, accountability, and scalability through on-demand queries, proxy appointment, subgroup signatures, granular access control, and smart contracts, in order to support privacy-preserving distributed data sharing. In this data-driven AI era, where big data is a prerequisite for training an effective deep learning model, DeepLinQ provides a trusted infrastructure for collecting training data in a privacy-preserving way. The paper uses healthcare data sharing as an application example to illustrate the key properties and design of DeepLinQ.
Virtual Crime Scene
J. Nelis, Stefan Desmet, Jeroen Wauters, R. Haelterman, Erwin Borgers, Dimitri Kun
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00035
Abstract: To improve the quality of murder investigations, the Belgian Federal Police decided to implement new technologies. The goal is to create a virtual representation of the crime scene. A 3D LIDAR scan will serve as the foundation of the virtual crime scene. Using the data from the scan, the crime scene will be recreated in Virtual Reality, which makes it possible for the investigators, investigating magistrates, judges, members of the jury and attorneys to walk around the crime scene without ever needing to be there in person. Besides the laser scan, the virtual crime scene will also include various other kinds of data collected at the crime scene. Furthermore, there is an important collaboration with the ballistic expert and the employees of the university hospital in Leuven to reconstruct and visualize bullet trajectories.
Virtual Reality Conferencing
P. Pazour, A. Janecek, H. Hlavacs
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00019
Abstract: The development and availability of immersive VR devices have risen significantly in recent years. Combined with the latest milestones in computer graphics and current motion-tracking devices, a certain feeling of presence inside the virtual world can be achieved, with the long-term aspiration of becoming indistinguishable from the real world. Although this aspiration may still be visionary today, the possibilities for collaborative human interaction within a virtual reality system are already manifold. In this work we focus on the development and evaluation of a virtual reality conferencing application prototype supporting personalized, user-based avatars for up to four persons joining a virtual conference room remotely. The prototype was evaluated in terms of the feeling of presence experienced inside the application's virtual environment when wearing a virtual reality headset and when using the application in desktop mode, respectively. The results of the conducted experiments indicate a positive impact on the feeling of presence for users wearing virtual reality headsets compared to the group without headsets.
Converting Natural Language Text to ROS-Compatible Instruction Base
Takondwa Kakusa, M. Hsiao
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00051
Abstract: Natural language processing is a growing field. Although it is difficult to create a natural language system that robustly reacts to and handles every situation, it is quite possible to design a system that reacts to specific instructions or scenarios. The contributions of this work are (1) a set of instruction types that allows for conditional statements within natural language instructions, (2) a modular system built on the Robot Operating System (ROS) that allows for more robust communication and integration, and (3) an interconnection between the written text and the derived instructions that makes sentence construction more seamless and natural for the user. As the results show, the system can handle a diverse set of sentence structures, allowing for robust paragraphs. The system must also be carefully tuned to the exact parameters the user is looking for, striking a balance between how much the user needs to learn and how accurately the system must follow the instructions.
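The instruction types of contribution (1) are not detailed in the abstract. Purely as an illustrative assumption, a conditional instruction might be separated from a direct command roughly as follows (the schema, function name and regex are hypothetical, not taken from the paper):

```python
import re

def classify_instruction(sentence: str) -> dict:
    """Split a natural-language instruction into a conditional or a command.

    Hypothetical schema: {"type": "conditional", "condition": ..., "action": ...}
    for "if X then Y" sentences, else {"type": "command", "action": ...}.
    """
    text = sentence.strip().rstrip(".")
    m = re.match(r"if\s+(?P<cond>.+?),?\s+then\s+(?P<act>.+)", text, re.IGNORECASE)
    if m:
        return {"type": "conditional",
                "condition": m.group("cond"),
                "action": m.group("act")}
    return {"type": "command", "action": text}
```

A ROS node could then map each action string onto a publisher call or service request for the corresponding topic, which is where the modularity of contribution (2) would come in.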
Exploiting the Integration of Wearable Virtual Reality and Bio-Sensors for Persons with Neurodevelopmental Disorders
F. Garzotto, Nicolò Messina, Vito Matarazzo, Riccardo Facchini, Lukasz Moskwa, G. Oliva
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00031
Abstract: Wearable Virtual Reality (WVR) is thought to offer a powerful approach in the treatment of subjects with Neurodevelopmental Disorders (NDD), e.g., to improve attention skills and autonomy. We propose to integrate WVR applications with wearable bio-sensors. The visualization of the information extracted from these devices, integrated with measures derived from interaction logs, would help therapists monitor the patient's state and attention levels during a WVR experience. The comparison of results across different sessions would facilitate the assessment of patients' improvements. This approach can be exploited to complement more traditional observation-based evaluation methods or clinical tests, and can support evidence-based research on the effectiveness of wearable VR for persons with NDD.
Interface and Experience Design with AI for VR/AR (DAIVAR'18) and AI/ML for Immersive Simulations (AMISIM'18)
Kening Zhu, A. Lugmayr, Xiaojuan Ma, F. Mueller, U. Engelke, S. Simoff, Tomas Trescak, A. Bogdanovych, J. Rodríguez-Aguilar, Huyen Vu
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00052
Abstract: We present the merged workshops "Interface and Experience Design with AI for VR/AR (DAIVAR'18)" and "AI/ML for Immersive Simulations (AMISIM'18)". Both workshops were held within the context of the IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) in Taiwan in 2018. We introduce the goals, topics, and basic ideas of both workshops, and present some basic literature in the domain for further reading.
A Two-Level Planning Framework for Mixed Reality Interactive Narratives with User Engagement
Manuel Braunschweiler, Steven Poulakos, Mubbasir Kapadia, R. Sumner
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00021
Abstract: We present an event-based interactive storytelling system for virtual 3D environments that aims to offer free-form user experiences while constraining the narrative to follow author intent. The characters of our stories are represented as smart objects, each having its own state and a set of capabilities it exposes to the virtual world. Our narratives are represented as a collection of branching stories, where narrative flow is controlled by author-defined states. A user model is employed to evaluate the user's engagement with smart objects and events, based on proximity, interaction patterns and visibility to the user. A two-level online planning system finds the best narrative trajectory along pre-authored stories according to the user model, and generates a story sequence toward that trajectory with Monte Carlo Tree Search. We present the capabilities of our interactive storytelling system on an example story and describe the adaptations required for modeling user engagement in AR and VR applications.
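The user model above scores engagement from proximity, interaction patterns and visibility, but the abstract does not give the actual formula; the weights, saturating forms and names below are illustrative assumptions only:

```python
import math
from dataclasses import dataclass

@dataclass
class Observation:
    distance: float      # meters between user and smart object
    interactions: int    # interaction count in the current session
    visible_time: float  # seconds the object has been in the user's view

def engagement(obs: Observation,
               w_prox: float = 0.4, w_inter: float = 0.3, w_vis: float = 0.3,
               max_dist: float = 10.0, vis_horizon: float = 30.0) -> float:
    """Weighted engagement score in [0, 1]; higher means more engaged."""
    proximity = max(0.0, 1.0 - obs.distance / max_dist)    # closer is better
    interaction = 1.0 - math.exp(-0.5 * obs.interactions)  # saturating count
    visibility = min(1.0, obs.visible_time / vis_horizon)  # capped dwell time
    return w_prox * proximity + w_inter * interaction + w_vis * visibility
```

A planner in this style could then rank candidate story branches by the summed engagement of the smart objects each branch involves.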
A Natural Language Programming Application for Lego Mindstorms EV3
Yue Zhan, M. Hsiao
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00043
Abstract: In this paper, a controlled natural language (CNL) based program synthesis system for the Lego Mindstorms EV3 (EV3) is introduced. The system is developed with the intention of helping middle and high school Lego robotics enthusiasts and non-programmers learn the skills needed to program and engineer the robot with less effort. The system generates the resulting code in Microsoft Small Basic, which controls the EV3 Intelligent Brick with support for all EV3 sensors and motors. Preliminary results show that our approach is capable of generating functional, executable code from the users' controlled natural language specifications. Detailed error messages are also given when the system is confronted with unimplementable sentences.
Large Scale Information Marker Coding for Augmented Reality Using Graphic Code
Bruno Patrão, Leandro Cruz, Nuno Gonçalves
2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00027
Abstract: In this work we present some uses of Graphic Code for large-scale information coding applied to Augmented Reality. Machine Readable Codes (MRCs) are broadly used for many reasons; however, they mostly carry small pieces of information (URLs, ID numbers, phone numbers, visiting cards, etc.). The recently introduced Graphic Code differs from classical MRCs in that it integrates well with images for aesthetic control (Graphic Code has higher aesthetic value than classical MRCs). Furthermore, it can encode a large amount of information, so it can also store other kinds of models (meshes, images, sketches, etc.) for applications that are unusual for classical MRCs. The main advantage of using our approach as an Augmented Reality marker is the possibility of creating generic applications that can read and decode these Graphic Code markers, which may contain 3D models and complex scenes encoded in them. Additionally, the resulting marker has strong aesthetic characteristics, since it is generated from any chosen base image.