{"title":"Effects of Behavioral and Anthropomorphic Realism on Social Influence with Virtual Humans in AR","authors":"Hanseul Jun, J. Bailenson","doi":"10.1109/ISMAR-Adjunct51615.2020.00026","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00026","url":null,"abstract":"While many applications in AR will display embodied agents in scenes, there is little research examining the social influence of these AR renderings. In this experiment, we manipulated the behavioral and anthropomorphic realism of an embodied agent. Participants wore an AR headset and walked a path specified by four virtual cubes, designed to bring them close to either humans or objects rendered in AR. In addition there was a control condition with no virtual objects in the room. Participants were then asked to choose between two physical chairs to sit on—one with a virtual human or object on it, or one without any. We examined the interpersonal distance between participants and rendered objects, physical seat choice, body rotation direction while choosing a seat, and social presence ratings. For interpersonal distance, there was an effect of anthropomorphic realism but not behavioral realism—participants left more space for human-shaped objects than for non-human objects, regardless of how real the human behaved. There were no significant differences in seat choice and rotation direction. Social presence ratings were higher for agents high in both behavioral and anthropomorphic realism than for other conditions. We discuss implications for the social influence theory [5] and for the design of AR systems.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126694230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EmnDash: M-sequence Dashed Markers on Vector-based Laser Projection for Robust High-speed Spatial Tracking","authors":"Ryota Nishizono, Tomohiro Sueishi, M. Ishikawa","doi":"10.1109/ISMAR-Adjunct51615.2020.00058","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00058","url":null,"abstract":"Camera pose estimation is commonly used for augmented reality, and it is currently expected to be integrated into sports assistant technologies. However, conventional methods face difficulties in simultaneously achieving fast estimation in milliseconds or less for sports, bright lighting environments of the outdoors, and capturing of large activity areas. In this paper, we propose EmnDash, M-sequence dashed markers on vector-based laser projection for an asynchronous high-speed dynamic camera, which provides both a graphical information display for humans and markers for the wearable high-speed camera with a high S/N ratio from a distance. One of the main notions is drawing a vector projection image with a single stroke using two dashed lines as markers. The other involves embedding the binary M-sequence as the length of each dashed line and its recognition method using locality. The recognition of the M-sequence dashed line requires only a one-shot image, which increases the robustness of tracking both in terms of camera orientation and occlusion. We experimentally confirm an increase in recognizable posture, sufficient tracking accuracy, and low-computational cost in the evaluation of a static camera. We also show good tracking ability and demonstrate immediate recovery from occlusion in the evaluation of a dynamic camera.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129067585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the ISMAR 2020 General Chairs","authors":"","doi":"10.1109/ismar50242.2020.00005","DOIUrl":"https://doi.org/10.1109/ismar50242.2020.00005","url":null,"abstract":"","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115293732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Title Page i","authors":"","doi":"10.1109/ismar-adjunct51615.2020.00001","DOIUrl":"https://doi.org/10.1109/ismar-adjunct51615.2020.00001","url":null,"abstract":"","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114172154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Remote Assistance System in Augmented Reality for Early School Dropout Prevention","authors":"Marina-Anca Cidotã, D. Datcu","doi":"10.1109/ISMAR-Adjunct51615.2020.00091","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00091","url":null,"abstract":"The educational system suffers from early school dropout (i.e. graduation of the 8th grade at most), which is critical in Romania by its magnitude (16.4% in 2018). Unequal access to resources in education, the gap between rural and urban areas and the integration issue of the Roma population are challenges that result in deeper inequalities in society as a whole. Although many reforms have been applied and the budget for education kept increasing lately, Romania did not reach (or even get close to) any of the EU educational targets set for 2020 (i.e. 10% for early school dropout). The current situation of the education demands that more modern solutions should be discussed. In this paper, we propose an innovative technical system which facilitates remote learning experiences, especially for pupils with learning difficulties. We aim to explore the use of Augmented Reality (AR) to promote virtual co-location in education. That means, remote teachers can reach rather isolated pupils more easily and can smoothly engage in collaborative learning sessions. It is our belief that such an approach has the potential to reduce the risk of early school dropout.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114563714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Study on Virtual Reality for Design Reviews in Architecture","authors":"M. Fiorentino, Elisa Maria Klose, Maria Lucia V. Alemanno, Isabella Giordano, Alessandro De Bellis, Ilaria Cavaliere, Dario Costantino, G. Fallacara, O. Straeter, Gabriele Sorrento","doi":"10.1109/ISMAR-Adjunct51615.2020.00079","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00079","url":null,"abstract":"Virtual reality is a candidate to become the preferred interface for architectural design review, but the effectiveness and usability of such systems is still an issue. We put together a multidisciplinary team to implement a test methodology and system to compare VR with 2D interaction, with a coherent test platform using Rhinoceros as industry-standard CAD software. A direct and valid comparison of the two setups is made possible by using the same software for both conditions. We designed and modeled three similar CAD models of a 2 two-story villa (1 for the training and 2 for the test) and we implanted 13 artificial errors, simulating common CAD issues. Users were asked to find the errors in a 10 minutes fixed-time session for each setup respectively. We completed our test with 10 students from the design and architecture faculty, with proven experience of the 2D version of the CAD. We did not find any significant differences between the two modalities in cognitive workload, but the user preference was clearly towards VR. The presented work may provide interesting insights for future human-centered studies and to improve future VR architectural applications.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127151138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Catching the Drone - A Tangible Augmented Reality Game in Superhuman Sports","authors":"Christian Eichhorn, Adnane Jadid, D. A. Plecher, Sandro Weber, G. Klinker, Yuta Itoh","doi":"10.1109/ISMAR-Adjunct51615.2020.00022","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00022","url":null,"abstract":"The newly defined genre of Superhuman Sports provides several challenges, including the need to develop rich Augmented Reality (AR) game concepts with tangible interactions and augmentation. In this paper, we provide insights into a Superhuman Sports ball game, where players are able to interact in mid-air, rapidly and precisely with a smart, augmented and catchable drone ball. We describe our core concepts and a path towards a fully functional system with multiple and potentially different display solutions, ranging from smartphone-based AR to, eventually, HMD-based AR. For this AR game idea, a unique drone with a trackable cage based on LED pattern recognition has been developed. The player, as well as the drone will move swiftly during the game. To precisely estimate the 6DoF pose of the fast-moving drone in this dynamic scenario, we propose a suitable pipeline with a tracking algorithm. As foundation for the tracking, LEDs have been placed in a specific, spherical pattern on the drone cage. Furthermore, refinements based on the unique attributes of LEDs are considered.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124675157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TGA: Two-level Group Attention for Assembly State Detection","authors":"Hangfan Liu, Yongzhi Su, J. Rambach, A. Pagani, D. Stricker","doi":"10.1109/ISMAR-Adjunct51615.2020.00074","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00074","url":null,"abstract":"Assembly state detection, i.e., object state detection, has a critical meaning in computer vision tasks, especially in AR assisted assembly. Unlike other object detection problems, the visual difference between different object states can be subtle. For the better learning of such subtle appearance difference, we proposed a two-level group attention module (TGA), which consists of inter-group attention and intro-group attention. The relationship between feature groups as well as the representation within each feature group is simultaneously enhanced. We embedded the proposed TGA module in a popular object detector and evaluated it on two new datasets related to object state estimation. The result shows that our proposed attention module outperforms the baseline attention module.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128877361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Comfort Benefits of Gaze-Directed Steering","authors":"Chengyuan Lai, Xinyu Hu, Ann Segismundo, Ananya Phadke, Ryan P. McMahan","doi":"10.1109/ISMAR-Adjunct51615.2020.00040","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00040","url":null,"abstract":"Spatial steering is a common virtual reality (VR) travel metaphor that affords virtual locomotion and spatial understanding. Variations of spatial steering include Gaze-, Hand-, and Torso-directed steering. We present a study that employed a dual-task methodology to investigate the user performance characteristics and cognitive loads of the three spatial steering techniques, in addition to several subjective measures. Using the two one-sided tests (TOST) procedure for dependent means, we have found that Gaze- and Hand-directed steering were statistically equivalent for travel performance and cognitive load. However, we found that Gaze-directed steering induced significantly less simulator sickness than Hand-directed steering.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129538895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Evaluation of AR-Assisted Navigation for Search and Rescue in Underground Spaces","authors":"D. C. Demirkan, H. Düzgün","doi":"10.1109/ISMAR-Adjunct51615.2020.00017","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00017","url":null,"abstract":"In this study, we evaluated the performance of AR-assisted navigation in a real underground mine with good and limited illumination conditions as well as without the illumination considering possible search and rescue conditions. For this purpose, we utilized the Lumin SDK’s embedded spatial mapping algorithm for mapping and navigating. We used the spatial mapping algorithm to create the mesh model of the escape route and to render it with the user input into the Magic Leap One. Then we compared the spatial mapping algorithm in three different scenarios for the evacuation of an underground mine in an emergency situation. The escape route has two junctions and 30 meters (100 feet). The baseline scenarios are (i) evacuation of the mine in a fully illuminated condition, (ii) evacuation with the headlamp and (iii) without any illumination. In the first scenario (fully illuminated route with the rendered meshes) the evacuation took 40 seconds. In the second scenario (illumination with the headlamp), the evacuation took 44 seconds. For the last scenario (no light source and hence in total darkness) the evacuation took 54 seconds. We found that AR-assisted navigation is effective for supporting search and rescue efforts in high attrition conditions of underground space.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}