Title: Exploring Bi-Directional Pinpointing Techniques for Cross-Reality Collaboration
Authors: Priyanka Pazhayedath, Pedro Belchior, Rafael Prates, Filipe Silveira, D. Lopes, Robbe Cools, Augusto Esteves, A. Simeone
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00055
Abstract: Virtual Reality (VR) technology enables users to immerse themselves in artificial worlds. However, it isolates users from the outside world and impedes collaboration with other users who might be outside of the VR experience, and vice versa. We implemented two systems that explore how such an external user in the real world can interact across realities with a user immersed in virtual reality, either locally or remotely, in order to share pinpointed locations. In the first, we investigate three cross-reality techniques for the external user to draw the attention of their VR counterpart to specific objects present in the virtual environment (Voice, Highlight, and Arrow). Participants performed best overall with, and preferred, the Arrow technique, followed by the Highlight technique. In the second system, we expand on these two techniques to explore an even starker cross-reality interaction between users in VR and users interacting via a tablet computer, who direct each other to pinpoint objects in the scene. We adapted the previous two techniques and implemented two others (Vision cone, Pointing) that support bi-directional communication between users. When it comes to bi-directional pinpointing, VR users still showed a preference for the Arrow technique (now described as Pointing in Giant mode), while mobile users were split between the Vision cone and the Highlight techniques.

Title: Effects of Immersion and Visual Angle on Brand Placement Effectiveness
Authors: S. Oberdörfer, Samantha Straka, Marc Erich Latoschik
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00102
Abstract: Typical inherent properties of immersive Virtual Reality (VR), such as felt presence, might have an impact on how well brand placements are remembered. In this study, we exposed participants to brand placements in four conditions of varying degrees of immersion and visual angle on the stimulus. Placements appeared either as a poster or as a puzzle. We measured the recall and recognition of these placements. Our study revealed that neither immersion nor visual angle had a significant impact on memory for brand placements.

Title: Programmable Virtual Reality Environments
Authors: Nan Sun, Annette C. Feng, Ryan Patton, Y. Gingold, W. Lages
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00192
Abstract: We present a programmable virtual environment that allows users to create and manipulate 3D objects via code while inside virtual reality. Our environment supports the control of 3D transforms as well as physical and visual properties. Programming is done by means of a custom visual block language that is translated into Lua scripts. We believe that the direction of this project will benefit computer science education by helping students learn programming and spatial thinking more efficiently.

Title: A Simulator for Human-Robot Interaction in Virtual Reality
Authors: Mark Murnane, Padraig Higgins, Monali Saraf, Francis Ferraro, Cynthia Matuszek, Don Engel
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00117
Abstract: We present a suite of tools to model a robot, its sensors, and the surrounding environment in VR, with the goal of collecting training data for real-world robots. The virtual robot observes a rigged avatar created in our photogrammetry facility and embodying a VR user. We are particularly interested in verbal human/robot interactions, which can be combined with the robot's sensor data for grounded language learning. Because virtual scenes, tasks, and robots are easily reconfigured compared to their physical analogs, our approach proves extremely versatile in preparing a wide range of robot scenarios for an array of use cases.

{"title":"A pipeline for facial animations on low budget VR productions","authors":"Huoston Rodrigues Batista","doi":"10.1109/VRW52623.2021.00090","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00090","url":null,"abstract":"This work presents a new approach to developing character facial animations for Virtual Reality projects focusing on low budget productions. Facial animations are mostly time-consuming and complex and often involve expensive equipment and software inaccessible to the vast majority of professionals. This work aims to present an alternative facial animation pipeline using a hybrid approach and combining traditional software, algorithms, and processes to offer alternatives for producing high-quality facial animations for characters focusing on Virtual Reality applications.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133916286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting the Scene-Graph-as-Bus Concept: Inter-networking Heterogeneous Applications Using glTF Fragments","authors":"Jaspreet Singh Dhanjan, A. Steed","doi":"10.1109/VRW52623.2021.00068","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00068","url":null,"abstract":"While there are now many examples of successful collaborative mixed reality applications, each application uses its own custom networking framework and applications rarely inter-operate. To enable much larger-scale distributed systems, we will need inter-networking protocols that allow heterogeneous applications to exchange data. We demonstrate a proof of concept implementation that revisits the concept of using a scene-graph as a bus. That is, sharing low-level geometry and rendering information, rather than high-level semantic events. Our networking protocol uses glTF fragments and edits to express scene changes. We use the proof of concept to explore the potential to inter-network very different applications that are based on different underlying graphics engine technology.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132144239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Fighting Alcohol Craving Using Virtual Reality: the Role of Social Interaction
Authors: Andreas Junker, Carl Hutters, Daniel Reipur, Lasse Embøl, Ali Adjorlu, R. Nordahl, Stefania Serafin, Daniel Thaysen Petersen, A. Fink-Jensen
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00054
Abstract: Craving is a cause of relapse in patients suffering from a substance use disorder. Cue-exposure therapy builds on eliciting feelings of craving in patients in safe and controlled environments to condition them to control these feelings. Different efficient and resource-friendly methods of eliciting craving exist (such as written material and still pictures), but these methods create knowledge- and skill-transfer gaps between therapy sessions and real-life scenarios. Virtual reality allows more true-to-life experiences, and research demonstrates its capability to elicit craving in patients. Studies have identified different environments that elicit craving, suggesting bars to be among the most effective; research also suggests that the presence of others is an effective method of eliciting craving in users. However, the effect of social interaction has not yet been explored. Therefore, this paper presents a virtual bar designed to investigate whether social interaction affects alcohol craving in users. The VR intervention was designed in close cooperation with a psychiatrist experienced in working with individuals suffering from alcohol use disorder. We present the designed and developed VR intervention and discuss how an experiment can be conducted after the COVID-19 shutdowns.

Title: Touch Recognition on Complex 3D Printed Surfaces using Filter Response Analysis
Authors: Dimitar Valkov, S. Thiele, Karim Huesmann, E. Gebauer, B. Risse
In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00043
Abstract: Touch sensing on various surfaces has played a prominent role in human-computer interaction over the last decades. However, current technologies are mostly suited for flat or sufficiently smooth surfaces, and touch sensing on complex geometries remains a challenging task, especially when the sensing hardware needs to be embedded into the interactive object. In this paper, we introduce a novel sensing approach based on the observation that conductive materials and the user's hand or finger can be considered a complex filter system with well-conditioned input-output relationships. Different hand postures can be disambiguated by mapping the response of these filters using an intentionally small convolutional neural network. Our experiments show that even straightforward electrode geometries provided by common 3D printers and filaments can be used to achieve high accuracy, rendering expressive interactions with complex 3D shapes possible while allowing the touch surface to be integrated directly into the interactive object. Ultimately, our low-cost and versatile sensing approach enables rich interaction on a variety of objects and surfaces, which we demonstrate through a series of illustrative experiments.

{"title":"MyChanges: Tools for the co-designing of housing transformations","authors":"S. Eloy, Micaela Raposo, F. Costa, P. Vermaas","doi":"10.1109/VRW52623.2021.00265","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00265","url":null,"abstract":"MyChanges is a prototype tool for generating and visualizing architectonic modifications of existing housing in co-design projects with inhabitants. Our hypothesis is that giving inhabitants design solutions that would fit individual needs and aspirations, will increase their satisfaction with their house. To arrive at architectonically responsible house transformations, we used a shape grammar system for defining the possible modifications [1]. For empowering inhabitants to understand and explore these modifications to their housing and increase the potential of their participation [2], we developed a mockup tool that comprehends two main parts: shape generation and visualization. The shape generation component is currently a mockup simulation that reproduces some of the generation possibilities of the grammar. For visualizing the outcomes, we developed three different possibilities: i) a semi-immersive visualization where the user utilizes a smart phone to see a 360º render of the site; ii) a fully immersive visualization developed with Unity in which the user with a Head Mounted Display, can freely navigate through the final design; iii) a non-immersive screen-based visualization, where the user, with a tablet device, visualizes a static image of the final design. Interviews and tests with real inhabitants (n=12) were performed to assess user’s response to the potential of the tools and preliminary conclusions show that a tool like MyChanges would have acceptance among inhabitants [3].","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121109088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[DC] Embodying an avatar with an asymmetrical lower body to modulate the dynamic characteristics of gait initiation","authors":"Valentin Vallageas, R. Aissaoui, David R. Labbé","doi":"10.1109/VRW52623.2021.00245","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00245","url":null,"abstract":"Virtual reality (VR) enables the user to perceive body owner ship towards a virtual body. This illusion is induced through first-person perspective (1PP) and synchronous movement with the real body. Previous studies have shown that pronounced differences between the real and the virtual body lead to changes in the user’s behavior. It has also been shown that modifying the body image can affect the user’s movements. Nevertheless, the state of the art does not refer to the kinetic and kinematic impacts of one virtual lower limb deformation. Therefore, this paper presents a methodology exploring the impact of a self-avatar with an asymmetrical lower body (one limb longer or larger than the other) on the dynamic characteristics of the user during a gait initiation task.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117050891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}