{"title":"Exploring Emotion Brushes for a Virtual Reality Painting Tool","authors":"Jungah Son, Misha Sra","doi":"10.1145/3489849.3489925","DOIUrl":"https://doi.org/10.1145/3489849.3489925","url":null,"abstract":"We present emoPaint, a virtual reality application that allows users to create paintings with expressive emotion-based brushes and shapes. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by allowing users to use brushes corresponding to specific emotions or to create their own emotion brushes and paint with the corresponding visual elements. Our system provides a variety of line textures, shape representations and color palettes for each emotion to enable users to control expression of emotions in their paintings. In this work we describe our implementation and illustrate paintings created using emoPaint.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130830430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PneuMod: A Modular Haptic Device with Localized Pressure and Thermal Feedback","authors":"Bowen Zhang, Misha Sra","doi":"10.1145/3489849.3489857","DOIUrl":"https://doi.org/10.1145/3489849.3489857","url":null,"abstract":"Humans have tactile sensory organs distributed all over the body. However, haptic devices are often only created for one part (e.g., hands, wrist, or face). We propose PneuMod, a wearable modular haptic device that can simultaneously and independently present pressure and thermal (warm and cold) cues to different parts of the body. The module in PneuMod is a pneumatically-actuated silicone bubble with an integrated Peltier device that can render thermo-pneumatic feedback through shapes, locations, patterns, and motion effects. The modules can be arranged with varying resolutions on fabric to create sleeves, headbands, leg wraps, and other forms that can be worn on multiple parts of the body. In this paper, we describe the system design, the module implementation, and applications for social touch interactions and in-game thermal and pressure feedback.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"2019 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129207771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PAIR: Phone as an Augmented Immersive Reality Controller","authors":"Arda Ege Unlu, R. Xiao","doi":"10.1145/3489849.3489878","DOIUrl":"https://doi.org/10.1145/3489849.3489878","url":null,"abstract":"Immersive head-mounted augmented reality allows users to overlay 3D digital content on a user’s view of the world. Current-generation devices primarily support interaction modalities such as gesture, gaze and voice, which are readily available to most users yet lack precision and tactility, rendering them fatiguing for extended interactions. We propose using smartphones, which are also readily available, as companion devices complementing existing AR interaction modalities. We leverage user familiarity with smartphone interactions, coupled with their support for precise, tactile touch input, to unlock a broad range of interaction techniques and applications - for instance, turning the phone into an interior design palette, touch-enabled catapult or AR-rendered sword. We describe a prototype implementation of our interaction techniques using an off-the-shelf AR headset and smartphone, demonstrate applications, and report on the results of a positional accuracy study.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127621919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ellipses Ring Marker for High-speed Finger Tracking","authors":"Tomohiro Sueishi, M. Ishikawa","doi":"10.1145/3489849.3489856","DOIUrl":"https://doi.org/10.1145/3489849.3489856","url":null,"abstract":"High-speed finger tracking is necessary for augmented reality and operation in human-machine cooperation without latency discomfort, but conventional markerless finger tracking methods are not fast enough and the marker-based methods have low wearability. In this paper, we propose an ellipses ring marker (ERM), a finger-ring marker consisting of multiple ellipses and its high-speed image recognition algorithm. The finger-ring shape has highly wearing continuity, and the surface shape is suitable for various viewing angle observation. The invariance of the ellipse in the perspective projection enables accurate and low-latency posture estimation. We have experimentally investigated the advantage in normal distribution, validated the sufficient accuracy and computational cost in the marker tracking, and showed a demonstration of dynamic projection mapping on a palm.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130134233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Object Categorisation Methods: Towards a Richer Understanding of Object Grasping for Virtual Reality","authors":"Andreea-Dalia Blaga, Maite Frutos Pascual, C. Creed, Ian Williams","doi":"10.1145/3489849.3489875","DOIUrl":"https://doi.org/10.1145/3489849.3489875","url":null,"abstract":"Object categorisation methods have been historically used in literature for understanding and collecting real objects together into meaningful groups and can be used to define human interaction patterns (i. e grasping). When investigating grasping patterns for Virtual Reality (VR), researchers used Zingg’s methodology which categorises objects based on shape and form. However, this methodology is limited and does not take into consideration other object attributes that might influence grasping interaction in VR. To address this, our work presents a study into three categorisation methods for virtual objects. We employ Zingg’s object categorisation as a benchmark against existing real and virtual object interaction work and introduce two new categorisation methods that focus on virtual object equilibrium and virtual object component parts. We evaluate these categorisation methods using a dataset of 1872 grasps from a VR docking task on 16 virtual representations of real objects and report findings on grasp patterns. We report on findings for each virtual object categorisation method showing differences in terms of grasp classes, grasp type and aperture. We conclude by detailing recommendations and future ideas on how these categorisation methods can be taken forward to inform a richer understanding of grasping in VR.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134056245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Rotations for Maneuvering in Immersive Virtual Environments","authors":"Pauline Bimberg, Tim Weissker, Alexander Kulik, Bernd Froehlich","doi":"10.1145/3489849.3489893","DOIUrl":"https://doi.org/10.1145/3489849.3489893","url":null,"abstract":"In virtual navigation, maneuvering around an object of interest is a common task which requires simultaneous changes in both rotation and translation. In this paper, we present Anchored Jumping, a teleportation technique for maneuvering that allows the explicit specification of a new viewing direction by selecting a point of interest as part of the target specification process. A first preliminary study showed that naïve Anchored Jumping can be improved by an automatic counter rotation that preserves the user’s relative orientation towards their point of interest. In our second, qualitative study, this extended technique was compared with two common approaches to specifying virtual rotations. Our results indicate that Anchored Jumping allows precise and comfortable maneuvering and is compatible with techniques that primarily support virtual exploration and search tasks. Equipped with a combination of such complementary techniques, seated users generally preferred virtual over physical rotations for indoor navigation.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122290625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial Augmented Reality Visibility and Line-of-Sight Cues for Building Design","authors":"James A. Walsh, James Baumeister, B. Thomas","doi":"10.1145/3489849.3489868","DOIUrl":"https://doi.org/10.1145/3489849.3489868","url":null,"abstract":"Despite the technological advances in building design, visualizing 3D building layouts can be especially difficult for novice and expert users alike, who must take into account design constraints including line-of-sight and visibility. Using CADwalk, a commercial building design tool that utilizes floor-facing projectors to show 1:1 scale building plans, this work presents and evaluates two floor-based visual cues for assisting with evaluating line-of-sight and visibility. Additionally, we examine the impact of using virtual cameras looking from the inside-out (from user’s location to objects of interest) and outside-in (looking from an object of interest’s location back towards the user). Results show that floor-based cues led to participants more correctly rating visibility, despite taking longer to complete the task. This is an effective tradeoff, given the final outcome (the building design) where accuracy is paramount.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127178651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Pilot Study Examining the Unexpected Vection Hypothesis of Cybersickness.","authors":"J. Teixeira, Sebastien Miellet, S. Palmisano","doi":"10.1145/3489849.3489895","DOIUrl":"https://doi.org/10.1145/3489849.3489895","url":null,"abstract":"The relationship between vection (illusory self-motion) and cybersickness is complex. This pilot study examined whether only unexpected vection provokes sickness during head-mounted display (HMD) based virtual reality (VR). 20 participants ran through the tutorial of Mission: ISS (an HMD VR app) until they experienced notable sickness (maximum exposure was 15 minutes). We found that: 1) cybersickness was positively related to vection strength; and 2) cybersickness appeared to be more likely to occur during unexpected vection. Given the implications of these findings, future studies should attempt to replicate them and confirm the unexpected vection hypothesis with larger sample sizes and rigorous experimental designs.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115763139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Of Leaders and Directors: A visual model to describe and analyse persistent visual cues directing to single out-of view targets","authors":"Johan Winther Kristensen, Allan Schjørring, Alex Mikkelsen, Daniel Agerholm Johansen, H. Knoche","doi":"10.1145/3489849.3489953","DOIUrl":"https://doi.org/10.1145/3489849.3489953","url":null,"abstract":"Researchers have come up with many visual cues that can guide Virtual (VR) and Augmented Reality (AR) users to out of view objects. The paper provides a classification of cues and tasks and visual model to describe and analyse cues to support their design.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"285 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115009914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pressing a Button You Cannot See: Evaluating Visual Designs to Assist Persons with Low Vision through Augmented Reality","authors":"Florian Lang, Tonja Machulla","doi":"10.1145/3489849.3489873","DOIUrl":"https://doi.org/10.1145/3489849.3489873","url":null,"abstract":"Partial vision loss occurs in several medical conditions and affects persons of all ages. It compromises many daily activities, such as reading, cutting vegetables, or identifying and accurately pressing buttons, e.g., on ticket machines or ATMs. Touchscreen interfaces pose a particular challenge because they lack haptic feedback from interface elements and often require people with impaired vision to rely on others for help. We propose a smartglasses-based solution to utilize the user’s residual vision. Together with visually-impaired individuals, we designed assistive augmentations for touchscreen interfaces and evaluated their suitability to guide attention towards interface elements and to increase the accuracy of manual inputs. We show that augmentations improve interaction performance and decrease cognitive load, particularly for unfamiliar interface layouts.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129804998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}