{"title":"ISMAR 2021 Sponsors and Partners","authors":"","doi":"10.1109/ismar52148.2021.00013","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00013","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124888663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ISMAR 2021 Organizing Committee","authors":"","doi":"10.1109/ismar52148.2021.00007","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00007","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124957967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OpenRDW: A Redirected Walking Library and Benchmark with Multi-User, Learning-based Functionalities and State-of-the-art Algorithms","authors":"Yijun Li, Miao Wang, Frank Steinicke, Qinping Zhao","doi":"10.1109/ismar52148.2021.00016","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00016","url":null,"abstract":"Redirected walking (RDW) is a locomotion technique that guides users on virtual paths, which might vary from the paths they physically walk in the real world. Thereby, RDW enables users to explore a virtual space that is larger than the physical counterpart with near-natural walking experiences. Several approaches have been proposed and developed; each using individual platforms and evaluated on a custom dataset, making it challenging to compare between methods. However, there are seldom public toolkits and recognized benchmarks in this field. In this paper, we introduce OpenRDW, an open-source library and benchmark for developing, deploying and evaluating a variety of methods for walking path redirection. The OpenRDW library provides application program interfaces to access the attributes of scenes, to customize the RDW controllers, to simulate and visualize the navigation process, to export multiple formats of the results, and to evaluate RDW techniques. It also supports the deployment of multi-user real walking, as well as reinforcement learning-based models exported from TensorFlow or PyTorch. The OpenRDW benchmark includes multiple testing conditions, such as walking in size varied tracking spaces or shape varied tracking spaces with obstacles, multiple user walking, etc. On the other hand, procedurally generated paths and walking paths collected from user experiments are provided for a comprehensive evaluation. It also contains several classic and state-of-the-art RDW techniques, which include the above mentioned functionalities.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"255 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115771260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Passenger Experience of Mixed Reality Virtual Display Layouts in Airplane Environments","authors":"Alexander Ng, Daniel Medeiros, Mark Mcgill, J. Williamson, S. Brewster","doi":"10.1109/ismar52148.2021.00042","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00042","url":null,"abstract":"Augmented / Mixed Reality headsets will in-time see adoption and use in a variety of mobility and transit contexts, allowing users to view and interact with virtual content and displays for productivity and entertainment. However, little is known regarding how multi-display virtual workspaces should be presented in a transit context, nor to what extent the unique affordances of transit environments (e.g. the social presence of others) might influence passenger perception of virtual display layouts. Using a simulated VR passenger airplane environment, we evaluated three different AR-driven virtual display configurations (Horizontal, Vertical, and Focus main display with smaller secondary windows) at two different depths, exploring their usability, user preferences, and the underlying factors that influenced those preferences. We found that the perception of invading other’s personal space significantly influenced preferred layouts in transit contexts. Based on our findings, we reflect on the unique challenges posed by passenger contexts, provide recommendations regarding virtual display layout in the confined airplane environment, and expand on the significant benefits that AR offers over physical displays in said environments.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131584290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Separation, Composition, or Hybrid? – Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality","authors":"Jonathan Wieland, Johannes Zagermann, Jens Müller, Harald Reiterer","doi":"10.1109/ismar52148.2021.00057","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00057","url":null,"abstract":"Augmented Reality (AR) supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: Either separate or compose users’ inputs. However, their experimental comparison using handheld AR displays is still missing. We, therefore, conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i. e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115423657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection-Guided 3D Hand Tracking for Mobile AR Applications","authors":"Yunlong Che, Yue Qi","doi":"10.1109/ismar52148.2021.00055","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00055","url":null,"abstract":"Interaction using bare hands is experiencing a growing interest in mobile-based Augmented Reality (AR). Existing RGB-based works fail to provide a practical solution to identifying rich details of the hand. In this paper, we present a detection-guided method capable of recovery 3D hand posture with a color camera. The proposed method consists of key-point detectors and 3D pose optimizer. The detectors first locate the 2D hand bounding box and then apply a lightweight network on the hand region to provide a pixel-wise like-hood of hand joints. The optimizer lifts the 3D pose from the estimated 2D joints in a model-fitting manner. To ensure the result plausibly, we encode the hand shape into the objective function. The estimated 3D posture allows flexible hand-to-mobile interaction in AR applications. We extensively evaluate the proposed approach on several challenging public datasets. The experimental results indicate the efficiency and effectiveness of the proposed method.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117211277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cybersickness Prediction from Integrated HMD’s Sensors: A Multimodal Deep Fusion Approach using Eye-tracking and Head-tracking Data","authors":"Rifatul Islam, Kevin Desai, J. Quarles","doi":"10.1109/ismar52148.2021.00017","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00017","url":null,"abstract":"Cybersickness prediction is one of the significant research challenges for real-time cybersickness reduction. Researchers have proposed different approaches for predicting cybersickness from bio-physiological data (e.g., heart rate, breathing rate, electroencephalogram). However, collecting bio-physiological data often requires external sensors, limiting locomotion and 3D-object manipulation during the virtual reality (VR) experience. Limited research has been done to predict cybersickness from the data readily available from the integrated sensors in head-mounted displays (HMDs) (e.g., head-tracking, eye-tracking, motion features), allowing free locomotion and 3D-object manipulation. This research proposes a novel deep fusion network to predict cybersickness severity from heterogeneous data readily available from the integrated HMD sensors. We extracted 1755 stereoscopic videos, eye-tracking, and head-tracking data along with the corresponding self-reported cybersickness severity collected from 30 participants during their VR gameplay. We applied several deep fusion approaches with the heterogeneous data collected from the participants. Our results suggest that cybersickness can be predicted with an accuracy of 87.77% and a root-mean-square error of 0.51 when using only eye-tracking and head-tracking data. We concluded that eye-tracking and head-tracking data are well suited for a standalone cybersickness prediction framework.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128267875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating Realistic Human Motion Trajectories of Mid-Air Gesture Typing","authors":"Junxiao Shen, John J. Dudley, P. Kristensson","doi":"10.1109/ismar52148.2021.00056","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00056","url":null,"abstract":"The eventual success of many AR and VR intelligent interactive systems relies on the ability to collect user motion data at large scale. Realistic simulation of human motion trajectories is a potential solution to this problem. Simulated user motion data can facilitate prototyping and speed up the design process. There are also potential benefits in augmenting training data for deep learning-based AR/VR applications to improve performance. However, the generation of realistic motion data is nontrivial. In this paper, we examine the specific challenge of simulating index finger movement data to inform mid-air gesture keyboard design. The mid-air gesture keyboard is deployed on an optical see-through display that allows the user to enter text by articulating word gesture patterns with their physical index finger in the vicinity of a visualized keyboard layout. We propose and compare four different approaches to simulating this type of motion data, including a Jerk-Minimization model, a Recurrent Neural Network (RNN)-based generative model, and a Generative Adversarial Network (GAN)-based model with two modes: style transfer and data alteration. We also introduce a procedure for validating the quality of the generated trajectories in terms of realism and diversity. The GAN-based model shows significant potential for generating synthetic motion trajectories to facilitate design and deep learning for advanced gesture keyboards deployed in AR and VR.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126636273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-hand Pose Estimation from the non-cropped RGB Image with Self-Attention Based Network","authors":"Zhoutao Sun, Yong Hu, Xukun Shen","doi":"10.1109/ismar52148.2021.00040","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00040","url":null,"abstract":"Estimating the pose of two hands is a crucial problem for many human-computer interaction applications. Since most of the existing works utilize cropped images to predict the hand pose, they require a hand detection stage before pose estimation or input cropped images directly. In this paper, we propose the first real-time one-stage method for pose estimation from a single RGB image without hand tracking. Combining the self-attention mechanism with convolutional layers, the network we proposed is able to predict the 2.5D hand joints coordinate while locating the two hands regions. And to reduce the extra memory and computational consumption caused by self-attention, we proposed a linear attention structure with a spatial reduction attention block called SRAN block. We demonstrate the effectiveness of each component in our network through the ablation study. And experiments on public datasets showed the competitive result with the state-of-the-art method.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133223387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the ISMAR 2021 General Chairs","authors":"","doi":"10.1109/ismar52148.2021.00005","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00005","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115842410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}