{"title":"Towards Method Time Measurement Identification Using Virtual Reality and Gesture Recognition","authors":"Abdelkader Bellarbi, J. Jessel, Laurent Da Dalto","doi":"10.1109/AIVR46125.2019.00040","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00040","url":null,"abstract":"Methods-Time Measurement (MTM) is a predetermined motion time system that is used primarily in industrial settings to analyze the methods used to perform any manual operation. In this paper, we introduce a system for automatic generation of MTM codes using only head and both hands 3D tracking. Our approach relies on the division of gestures into small elementary movements. Then, we built a decision tree to aggregate these elementary movements in order to generate the realized MTM code. The proposed system does not need any pre-learning step, and it can be useful in both virtual environments to train technicians and in real cases with industrial workshops to assist experts for MTM code identification. Obtained results are satisfying and promising. This work is in progress, we plan to improve it in the near future.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117287671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DatAR: Your Brain, Your Data, On Your Desk - A Research Proposal","authors":"Ghazaleh Tanhaei, L. Hardman, Wolfgang Huerst","doi":"10.1109/AIVR46125.2019.00029","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00029","url":null,"abstract":"We present a research proposal that investigates the use of 3D representations in Augmented Reality (AR) to allow neuroscientists to explore literature they wish to understand for their own scientific purposes. Neuroscientists need to identify potential real-life experiments they wish to perform that provide the most information for their field with the minimum use of limited resources. This requires understanding both the already known relationships among concepts and those that have not yet been discovered. Our assumption is that by providing overviews of the correlations among concepts through the use of linked data, these will allow neuroscientists to better understand the gaps in their own literature and more quickly identify the most suitable experiments to carry out. We will identify candidate visualizations and improve upon these for a specific information need. We describe our planned prototype 3D AR implementation and directions we intend to explore.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133726800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the Value of 3D Software Experience with Camera Layout in Virtual Reality","authors":"Daniel Hawes, Robert J. Teather, A. Arya, Max Krichenbauer","doi":"10.1109/AIVR46125.2019.00037","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00037","url":null,"abstract":"Preproduction is a critical step in creating 3D animated content for film and TV. The current process is slow, costly, and creatively challenging, forcing the layout director (LD) to interpret and create 3D worlds and camera directions from 2D drawings. Virtual reality (VR) offers the potential to make the process faster, cheaper, and more accessible. We conducted a user study evaluating the effectiveness of VR as a preproduction tool, specifically focusing on prior 3D modeling experience as an independent variable. We assessed the performance of experienced 3D software participants to those with no experience. Participants were tasked with laying out a camera shot for an animated scene. Our results revealed that the experienced 3D software participants did not significantly outperform their non-experienced counterparts. Overall, our study suggests that VR may provide an effective platform for animation pre-production, \"leveling the playing field\" for users with limited 3D software experience and broadening the talent pool of potential LDs.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133581473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[Title page iii]","authors":"","doi":"10.1109/aivr46125.2019.00002","DOIUrl":"https://doi.org/10.1109/aivr46125.2019.00002","url":null,"abstract":"","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133337812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using CNNs For Users Segmentation In Video See-Through Augmented Virtuality","authors":"Pierre-Olivier Pigny, L. Dominjon","doi":"10.1109/AIVR46125.2019.00048","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00048","url":null,"abstract":"In this paper, we present preliminary results on the use of deep learning techniques to integrate the user's self-body and other participants into a head-mounted video see-through augmented virtuality scenario. It has been previously shown that seeing user's bodies in such simulations may improve the feeling of both self and social presence in the virtual environment, as well as user performance. We propose to use a convolutional neural network for real time semantic segmentation of users' bodies in the stereoscopic RGB video streams acquired from the perspective of the user. We describe design issues as well as implementation details of the system and demonstrate the feasibility of using such neural networks for merging users' bodies in an augmented virtuality simulation.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123858795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewport Forecasting in 360° Virtual Reality Videos with Machine Learning","authors":"Johanna Vielhaben, H. Camalan, W. Samek, M. Wenzel","doi":"10.1109/AIVR46125.2019.00020","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00020","url":null,"abstract":"Objective. Virtual reality (VR) cloud gaming and 360° video streaming are on the rise. With a VR headset, viewers can individually choose the perspective they see on the head-mounted display by turning their head, which creates the illusion of being in a virtual room. In this experimental study, we applied machine learning methods to anticipate future head rotations (a) from preceding head and eye motions, and (b) from the statistics of other spherical video viewers. Approach. Ten study participants watched each 3 1/3 hours of spherical video clips, while head and eye gaze motions were tracked, using a VR headset with a built-in eye tracker. Machine learning models were trained on the recorded head and gaze trajectories to predict (a) changes of head orientation and (b) the viewport from population statistics. Results. We assembled a dataset of head and gaze trajectories of spherical video viewers with great stimulus variability. We extracted statistical features from these time series and showed that a Support Vector Machine can classify the range of future head movements with a time horizon of up to one second with good accuracy. Even population statistics among only ten subjects show prediction success above chance level. %Both approaches resulted in a considerable amount of prediction success using head movements, but using gaze movement did not contribute to prediction performance in a meaningful way. Even basic machine learning models can successfully predict head movement and aspects thereof, while being naive to visual content. Significance. Viewport forecasting opens up various avenues to optimize VR rendering and transmission. While the viewer can see only a section of the surrounding 360° sphere, the entire panorama has typically to be rendered and/or broadcast. The reason is rooted in the transmission delay, which has to be taken into account in order to avoid simulator sickness due to motion-to-photon latencies. Knowing in advance, where the viewer is going to look at may help to make cloud rendering and video streaming of VR content more efficient and, ultimately, the VR experience more appealing.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116918435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinate: A Spreadsheet-Programmable Augmented Reality Framework for Immersive Map-Based Visualizations","authors":"Aashiq Shaikh, Linda D. Nguyen, A. Bahremand, Hannah Bartolomea, F. Liu, Van Nguyen, Derrick M. Anderson, R. Likamwa","doi":"10.1109/AIVR46125.2019.00028","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00028","url":null,"abstract":"Augmented reality devices are opening up a new design space for immersive visualizations. 3D spatial content can be overlaid onto existing physical visualizations for new insights into the data. We present Coordinate, a collaborative analysis tool for augmented reality visualizations of map-based data designed for mobile devices. Coordinate leverages a spreadsheet-programmable web interface paired with a contemporary augmented reality infrastructure for an easy-to-use tool that can provide spatial information to multiple users. It also offers an immersive visualization experience that seeks to enrich presentations for business, educational, and scientific discussions.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126146285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Semantic Action Analysis via Emergent Language","authors":"A. Santamaría-Pang, James R. Kubricht, Chinmaya Devaraj, Aritra Chowdhury, P. Tu","doi":"10.1109/AIVR46125.2019.00047","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00047","url":null,"abstract":"Recent work on unsupervised learning has explored the feasibility of semantic analysis and interpretation via Emergent Language (EL) models. As EL requires some form of numerical embedding, it remains unclear which type is required in order for the EL to properly capture certain semantic concepts associated with a given task. In this paper, we compare different approaches that can be used to generate such embeddings: unsupervised and supervised. We start by producing a large dataset using a single-agent simulation environment. In these experiments, a purpose-driven agent attempts to accomplish a number of tasks. These tasks are performed in a synthetic cityscape environment, which includes houses, banks, theaters, and restaurants. Given such experiences, specification of the associated goal structure constitutes a narrative. We investigate the feasibility of producing an EL from raw pixel data with the hope that resulting descriptions can be used to infer the underlying narrative structure. Our initial experiments show that a supervised learning approach yields embeddings and EL descriptions that capture narrative structure. Alternatively, an unsupervised learning approach results in greater ambiguity with respect to the narrative.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123194652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality for Human-Robot Cooperation in Aircraft Assembly","authors":"A. Luxenburger, Jonas Mohr, Torsten Spieldenner, Dieter Merkel, Fabio Espinosa, Tim Schwartz, Florian Reinicke, Julian Ahlers, Markus Stoyke","doi":"10.1109/AIVR46125.2019.00061","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00061","url":null,"abstract":"Augmented Reality (AR) is often discussed as one of the enabling technologies in Industrie 4.0. In this paper, we describe a practical application, where Augmented Reality glasses are used not only for assembly assistance, but also as a means of communication to enable the orchestration of a hybrid team consisting of a human worker and two mobile robotic systems. The task of the hybrid team is to rivet so-called stringers onto an aircraft hull. While the two robots do the physically demanding, unergonomic and possibly hazardous tasks (squeezing and sealing rivets), the human takes over those responsibilities that need experience, multi-sensory sensitiveness and specialist knowledge. We describe the working scenario, the overall architecture and give design and implementation details on the AR application.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115042622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from Program Co-Chairs","authors":"G. Theodoropoulos","doi":"10.1109/sec.2018.00006","DOIUrl":"https://doi.org/10.1109/sec.2018.00006","url":null,"abstract":"On behalf of the 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2014) Organizing Committee, we are very pleased to announce that more than three hundred researchers and contributors from the world submitted their papers to share their research results and new ideas. The objective of this conference to provide a major international forum for scientists, engineers, and users to exchange and share their experiences, new ideas, and latest research results on all aspects of parallel and distributed computing systems.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126495114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}