"Towards Sailing supported by Augmented Reality: Motivation, Methodology and Perspectives"
Francesco Laera, M. Foglia, Alessandro Evangelista, A. Boccaccio, M. Gattullo, V. Manghisi, Joseph L. Gabbard, A. Uva, M. Fiorentino
2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020. DOI: 10.1109/ISMAR-Adjunct51615.2020.00076

Abstract: Sailing is a multidisciplinary activity that takes years to master. This sustainable sport has recently become even harder due to the growing number of onboard sensors, automation, artificial intelligence, and the high performance obtainable with modern vessels and sail designs. Augmented Reality (AR) technology has the potential to assist sailors of all ages and experience levels and to improve confidence, accessibility, situation awareness, and safety. This work presents our ongoing research and methodology for developing AR-assisted sailing. We started with the problem definition, followed by a state-of-the-art analysis based on a systematic review. Second, we elicited the main tasks and variables through an online questionnaire with experts. Third, we extracted the main variables and conceptualized visual interfaces using three different approaches. In the final phase, we designed and implemented a user-test platform that uses a VR headset to simulate AR in different marine scenarios. For real deployment, we observed the lack of available AR devices, so we are developing a headset dedicated to this task. We also envision a possible redesign of the entire boat as a consequence of the introduction of AR technology.
{"title":"LSFB: A Low-cost and Scalable Framework for Building Large-Scale Localization Benchmark","authors":"Haomin Liu, Mingxuan Jiang, Zhuang Zhang, Xiaopeng Huang, Linsheng Zhao, Meng Hang, Youji Feng, H. Bao, Guofeng Zhang","doi":"10.1109/ISMAR-Adjunct51615.2020.00065","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00065","url":null,"abstract":"With the rapid development of mobile sensor, network infrastructure and cloud computing, the scale of AR application scenario is expanding from small or medium scale to large-scale environments. Localization in the large-scale environment is a critical demand for the AR applications. Most of the commonly used localization techniques require quite a number of data with groundtruth localization for algorithm benchmarking or model training. The existed groundtruth collection methods can only be used in the outdoors, or require quite expensive equipments or special deployments in the environment, thus are not scalable to large-scale environments or to massively produce a large amount of groundtruth data. In this work, we propose LSFB, a novel low-cost and scalable frame-work to build localization benchmark in large-scale environments with groundtruth poses. The key is to build an accurate HD map of the environment. For each visual-inertial sequence captured in it, the groundtruth poses are obtained by joint optimization taking both the HD map and visual-inertial constraints. The experiments demonstrate the obtained groundtruth poses are accurate enough for AR applications. We use the proposed method to collect a dataset of both mobile phones and AR glass exploring in large-scale environments, and will release the dataset as a new localization benchmark for AR.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125365380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Comparing Single-modal and Multimodal Interaction in an Augmented Reality System"
Zhimin Wang, Huangyue Yu, Haofei Wang, Zongji Wang, Feng Lu
2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020. DOI: 10.1109/ISMAR-Adjunct51615.2020.00052

Abstract: Multimodal interaction is expected to offer a better user experience in Augmented Reality (AR) and has therefore become a recent research focus. However, due to the lack of hardware-level support, most existing works combine only two modalities at a time, e.g., gesture and speech. Gaze-based interaction techniques have been explored for screen-based applications but have rarely been used in AR systems. In this paper, we propose a multimodal interactive system that integrates gaze, gesture, and speech in a flexibly configurable augmented reality system. Our lightweight head-mounted device supports accurate gaze tracking, hand gesture recognition, and speech recognition simultaneously. More importantly, the system can be easily configured into different modality combinations to study the effects of different interaction techniques. We evaluated the system in a table-lamp scenario and compared the performance of the different interaction techniques. The experimental results show that the Gaze+Gesture+Speech combination is superior in terms of performance.
"Concept for a Virtual Reality Robot Ground Simulator"
Mario Lorenz, Sebastian Knopp, Philipp Klimant, J. Quellmalz, H. Schlegel
2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020. DOI: 10.1109/ISMAR-Adjunct51615.2020.00024

Abstract: For many VR applications where natural walking is necessary, the real movement space is far smaller than the virtual one. Treadmills and redirected walking are established methods for addressing this issue; however, both are limited to even surfaces and cannot simulate different ground properties. Here, a concept for a VR robot ground simulator is presented that allows walking on steep ground or even staircases and can simulate different surfaces such as sand, grass, or concrete. Starting from gait parameters, the technical requirements and implementation challenges for realizing such a VR ground simulator are derived.
{"title":"Real-Time Detection of Simulator Sickness in Virtual Reality Games Based on Players' Psychophysiological Data during Gameplay","authors":"Jialin Wang, Hai-Ning Liang, D. Monteiro, Wenge Xu, Hao Chen, Qiwen Chen","doi":"10.1109/ISMAR-Adjunct51615.2020.00071","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00071","url":null,"abstract":"Virtual Reality (VR) technology has been proliferating in the last decade, especially in the last few years. However, Simulator Sickness (SS) still represents a significant problem for its wider adoption. Currently, the most common way to detect SS is using the Simulator Sickness Questionnaire (SSQ). SSQ is a subjective measurement and is inadequate for real-time applications such as VR games. This research aims to investigate how to use machine learning techniques to detect SS based on in-game characters’ and users' physiological data during gameplay in VR games. To achieve this, we designed an experiment to collect such data with three types of games. We trained a Long Short-Term Memory neural network with the dataset eye-tracking and character movement data to detect SS in real-time. Our results indicate that, in VR games, our model is an accurate and efficient way to detect SS in real-time.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132431809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Retargetable AR: Context-aware Augmented Reality in Indoor Scenes based on 3D Scene Graph"
Tomu Tahara, Takashi Seno, Gaku Narita, T. Ishikawa
2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), August 2020. DOI: 10.1109/ISMAR-Adjunct51615.2020.00072

Abstract: We present Retargetable AR, a novel AR framework that yields an AR experience aware of the scene contexts of various real environments, achieving natural interaction between the virtual and real worlds. We characterize scene contexts by the relationships among objects in 3D space. The context assumed by an AR content and the context formed by the real environment where users experience AR are represented as abstract graph representations, i.e., scene graphs. From RGB-D streams, our framework generates a volumetric map in which the geometric and semantic information of a scene are integrated. Using the semantic map, we abstract scene objects as oriented bounding boxes and estimate their orientations. Our framework then constructs, in an online fashion, a 3D scene graph characterizing the context of the real environment for AR. The correspondence between the constructed graph and an AR scene graph denoting the context of the AR content provides a semantically registered content arrangement, which facilitates natural interaction between the virtual and real worlds. We performed extensive evaluations of our prototype system: a quantitative evaluation of oriented-bounding-box estimation, a subjective evaluation of AR content arrangement based on the constructed 3D scene graphs, and an online AR demonstration.
{"title":"ISMAR 2020 Conference Committee Members","authors":"H. Saito, H. Nagahara","doi":"10.1109/ismar.2018.00011","DOIUrl":"https://doi.org/10.1109/ismar.2018.00011","url":null,"abstract":"","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123647284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"ISMAR 2020 Science and Technology Program Committee Members"
Shimin Hu, Joseph Gabbard, Jens Grubert, G. Bruder, D. Cheng
2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). DOI: 10.1109/ismar.2018.00012

Europe:
- Klen Čopič Pucihar, University of Primorska, Slovenia
- Ulrich Eck, Technische Universität München, Germany
- Michele Fiorentino, Politecnico di Bari, Italy
- Per Ola Kristensson, University of Cambridge, UK
- Guillaume Moreau, Université de Nantes, France
- Alain Pagani, Technical University of Darmstadt, Germany
- Marc Stamminger, Friedrich-Alexander-Universität, Germany
- Frank Steinicke, Universität Hamburg, Germany
- Ian Williams, Birmingham City University, UK
{"title":"Title Page iii","authors":"","doi":"10.1109/itc-asia.2018.00002","DOIUrl":"https://doi.org/10.1109/itc-asia.2018.00002","url":null,"abstract":"","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116284979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the Workshop and Tutorial Chairs","authors":"G. Bruder, M. Servieres, M. Sugimoto","doi":"10.1109/ISMAR-ADJUNCT.2017.8","DOIUrl":"https://doi.org/10.1109/ISMAR-ADJUNCT.2017.8","url":null,"abstract":"e are really pleased to introduce the workshops and tutorials of the 15th International Symposium on Mixed and Augmented Reality. This year, we selected 7 workshops and 3 tutorials, covering a broad range of AR & MR topics and engaging presentation format. New for this year, the previous ISMAR MASH’D conference track has been redesigned as workshop, exploring new horizons and new form in the context of Media, Arts, Social Sciences, Humanities and Design.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132270075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}