Understanding Shoulder Surfer Behavior and Attack Patterns Using Virtual Reality
Yasmeen Abdrabou, S. Rivu, Tarek Ammar, Jonathan Liebers, Alia Saad, C. Liebers, Uwe Gruenefeld, Pascal Knierim, M. Khamis, Ville Mäkelä, Stefan Schneegass, Florian Alt. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI '22). DOI: https://doi.org/10.1145/3531073.3531106
Abstract: In this work, we explore attacker behavior during shoulder surfing. As such behavior is often opportunistic and difficult to observe in real-world settings, we leverage the capabilities of virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, participants shoulder surfed private screens displaying different types of content. From the results, we derive an understanding of the factors influencing shoulder surfing behavior, reveal common attack patterns, and sketch a behavioral shoulder surfing model. Our work suggests directions for future research on shoulder surfing and can serve as a basis for creating novel approaches to mitigate it.

Designing for Meaningful Interactions and Digital Wellbeing
A. M. Roffarello, Luigi De Russis, R. Schwartz, P. Apostolellis. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI '22). DOI: https://doi.org/10.1145/3531073.3535255
Abstract: In the contemporary attention economy, tech companies design the interfaces of their digital platforms with attention-capture dark patterns that drive user behavior to maximize time spent and daily visits. Two popular examples are viral recommendations and content autoplay on social networks. As these patterns exploit people's psychological vulnerabilities and may contribute to technology overuse and problematic behaviors, there is a need to promote the design of technology that better aligns with people's digital wellbeing. This workshop seeks to address this timely and urgent need by inviting researchers and practitioners from interdisciplinary domains to engage in conversation around the design of interfaces that allow people to take advantage of digital platforms in a meaningful and conscious way.

Watch The Videos Whenever You Have Time: Asynchronously Involving Neurologists in VR Prototyping
Zahra Aminolroaya, Wesley Willett, Samuel Wiebe, C. Josephson, F. Maurer. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI '22). DOI: https://doi.org/10.1145/3531073.3531181
Abstract: We present a video-based approach for collecting feedback on virtual reality (VR) prototypes. While we were developing a high-fidelity VR prototype to help neurologists analyze seizure propagation information for brain surgery planning, our neurologist collaborators' limited availability reduced opportunities for them to give feedback on critical design decisions. In response, we developed a remote feedback process in which developers created videos of the VR design process and used these to ground iterative input from our neurologist collaborators. We describe our approach and detail opportunities and challenges for video-based feedback to play a role in future VR prototyping.

{"title":"Applications of dynamic hypergraph visualization","authors":"P. Buono, Paola Valdivia","doi":"10.1145/3531073.3534495","DOIUrl":"https://doi.org/10.1145/3531073.3534495","url":null,"abstract":"We present a set of applications of dynamic hypergraph visualization. Dynamic hypergraphs can be used to represent connections between two or more entities that occur in time intervals. Visualizing dynamic hypergraphs can help analyzing the evolving connections in groups of entities. We report different domains where the data can be modeled as a hypergraph and some patterns that can be identified in the specific domains.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129204362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit Interaction Approach for Car-related Tasks On Smartphone Applications","authors":"Alba Bisante, Emanuele Panizzi, Stefano Zeppieri","doi":"10.1145/3531073.3531173","DOIUrl":"https://doi.org/10.1145/3531073.3531173","url":null,"abstract":"This work proposes an implicit interaction approach to ease implementing basic car-related tasks on a smartphone application. Many car drivers use apps on their smartphones to get support in typical tasks related to car usage, yet some of the available apps have a poor user experience because they require the user’s attention, causing a distraction while driving. In addition, they often rely on users inputting relevant data repetitively. Implicit interaction is a possible solution to improve the user experience of car-related interfaces. Basic user tasks for many car applications are (i) reporting parking the car in a specific position, (ii) declaring that the user will soon free a parking spot, and (iii) that a new trip with the car has begun (thus, that a parking spot became free). The proposed context-aware interaction approach to executing these tasks is described together with its implementation in an application that leverages the smartphone’s sensing capability of users’ locations and motion activities and merges them to infer parking and unparking events.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131694045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Volumetric Stories Across Immersive Displays","authors":"Krzysztof Pietroszek, M. Rebol, Liudmila Tahai","doi":"10.1145/3531073.3534498","DOIUrl":"https://doi.org/10.1145/3531073.3534498","url":null,"abstract":"We describe three interactive augmented reality stories for children that we showed at the Cannes Film Festival “Marche du Film” in July 2021. The stories were developed using a novel technique: 3D modeling and animation from within the Virtual Reality. The audience at Cannes viewed and interacted with these stories using mixed reality glasses, a prototype of the Sony Spatial Display, and an AR-enabled tablet. We report on the technical development process and the feedback from the Cannes audience.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122980584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impending Success or Failure? An Investigation of Gaze-Based User Predictions During Interaction with Ontology Visualizations","authors":"Bo Fu, B. Steichen","doi":"10.1145/3531073.3531081","DOIUrl":"https://doi.org/10.1145/3531073.3531081","url":null,"abstract":"Designing and developing innovative visualizations to assist humans in the process of generating and understanding complex semantic data has become an important element in supporting effective human-ontology interaction, as visual cues are likely to provide clarity, promote insight, and amplify cognition. While recent research has indicated potential benefits of applying novel adaptive technologies, typical ontology visualization techniques have traditionally followed a one-size-fits-all approach that often ignores an individual user's preferences, abilities, and visual needs. In an effort to realize adaptive ontology visualization, this paper presents a potential solution to predict a user's likely success and failure in real time, and prior to task completion, by applying established machine learning models on eye gaze generated during an interactive session. These predictions are envisioned to inform future adaptive ontology visualizations that could potentially adjust its visual cues or recommend alternative visualizations in real time to improve individual user success. This paper presents findings from a series of experiments to demonstrate the feasibility of gaze-based success and failure predictions in real time that can be achieved with a number of off-the-shelf classifiers without the need of expert configurations in the presence of mixed user backgrounds and task domains across two commonly used fundamental ontology visualization techniques.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122124366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MagicMuseum: Team-based Experiences in Interactive Smart Spaces for Cultural Heritage Education
Simone Ghiazzi, Stefano Riva, Mattia Gianotti, Pietro Crovari, F. Garzotto. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI '22). DOI: https://doi.org/10.1145/3531073.3534488
Abstract: MagicMuseum is a set of team-based, immersive, full-body activities for the cultural heritage education of primary school children. MagicMuseum exploits the interactive and multisensory capabilities of the Magic Room, an indoor smart space equipped with IoT-enriched components such as floor and wall projections, smart lighting, music and sound, motion and gesture sensors, and smart objects. The paper describes MagicMuseum and briefly reports an exploratory study involving 22 children at a local primary school.

{"title":"eXtended Reality and Passengers of the Future","authors":"S. Brewster","doi":"10.1145/3531073.3538399","DOIUrl":"https://doi.org/10.1145/3531073.3538399","url":null,"abstract":"I will present our work into improving passenger journeys using immersive Virtual and Augmented Reality (together XR) to support entertainment, work and collaboration on the move. In Europe, people travel an average of 12,000km per year on private and public transport, in cars, buses, planes and trains. These journeys are often repetitive and wasted time. This total will rise with the arrival of fully autonomous cars, which free drivers to become passengers. The potential to recover this lost time is impeded by 3 significant challenges: XR headsets could allow passengers to use their travel time in new, productive ways, but only if these fundamental challenges can be overcome. Passengers would be able to use large virtual displays for productivity; escape the physical confines of the vehicle and become immersed in virtual experiences; and communicate with distant others through new embodied forms of communication. I will discuss our solutions to these challenges, focusing on the visual aspects. We are: developing new interaction techniques for VR and AR that can work in confined, seated spaces; supporting safe, socially acceptable use of XR providing awareness of others and the travel environment; and overcoming motion sickness using multimodal countermeasures to support these novel immersive experiences.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115588980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridging Virtual and Reality in Mobile Augmented Reality Applications to Promote Immersive Experience
Qingfeng Wu, Yixian Li, Yingying She, Fang Liu, Yan Luo, Xinyu Yang. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI '22). DOI: https://doi.org/10.1145/3531073.3531122
Abstract: Immersion is a powerful and important part of interactive experience. However, little is known about how to facilitate immersion in Mobile Augmented Reality (MAR) applications. Establishing relationships between the virtual and the real is considered a promising way to promote immersion. To enhance immersion in MAR, we present BRIDGE, an interaction design model that builds a bridge between virtuality and reality through three kinds of relationships: the virtual object is closely related to the real environment the user is in (contextual relationship); the virtual object has the same physical properties as the real world (physical relationship); and the user imitates real-world interactions by directly interacting with the virtual world with their hands (interactive relationship). To evaluate the BRIDGE model, we implemented it in an application design and conducted a comparative study with 32 users, exploring the immersive user experience under contextual vs. non-contextual, physical vs. non-physical, and natural-interaction vs. screen-touch conditions. The quantitative and qualitative results show that virtual objects have a stronger presence, and users are more immersed in the environment, when a contextual and physical relationship exists and users can interact naturally. This study is a first step toward a better understanding of the characteristics that contribute to an immersive experience and of how they affect human perception and the presence of virtual objects. We hope these results provide design insights for MAR applications.