Proceedings of the 2022 International Conference on Advanced Visual Interfaces: Latest Publications

On-site or Remote Working?: An Initial Solution on How COVID-19 Pandemic May Impact Augmented Reality Users
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534490
Yuchong Zhang, Adam Nowak, A. Romanowski, M. Fjeld
{"title":"On-site or Remote Working?: An Initial Solution on How COVID-19 Pandemic May Impact Augmented Reality Users","authors":"Yuchong Zhang, Adam Nowak, A. Romanowski, M. Fjeld","doi":"10.1145/3531073.3534490","DOIUrl":"https://doi.org/10.1145/3531073.3534490","url":null,"abstract":"As a cutting edge technique requiring high-precision equipment, augmented reality (AR) and its users are influenced by the ambient environment. With the tremendous effect brought by COVID-19 pandemic, most people have shifted from on-site working to remote working. In this study, we propose an initial solution to explore the impact of COVID-19 pandemic on AR users working in these two situations. We develop a prototype application facilitated with gamification process in which users are requested to play an AR game in headset both in on-site and remote working environments. This game, which is highly dependent on the ambient environment, enables people to memorize, distinguish, and place virtual objects when immersing themselves into different surroundings with distinct distractors. We envision to conduct more user studies investigating how COVID-19 affects AR users, which could lead to more in-depth studies in the future.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124244389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531112
Bogdan-Florin Gheran, Santiago Villarreal-Narvaez, Radu-Daniel Vatavu, J. Vanderdonckt
{"title":"RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures","authors":"Bogdan-Florin Gheran, Santiago Villarreal-Narvaez, Radu-Daniel Vatavu, J. Vanderdonckt","doi":"10.1145/3531073.3531112","DOIUrl":"https://doi.org/10.1145/3531073.3531112","url":null,"abstract":"The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize and identify user-defined gestures from a number of 216 published gesture elicitation studies.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125979564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
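For context on the kind of analysis GEStory could consolidate: gesture elicitation studies conventionally report consensus via an agreement measure. A common formulation is Vatavu and Wobbrock's agreement rate, shown here purely as background, since the abstract does not state which measures GEStory records:

$$\mathcal{AR}(r) = \frac{|P|}{|P|-1} \sum_{P_i \subseteq P} \left(\frac{|P_i|}{|P|}\right)^{2} - \frac{1}{|P|-1}$$

where $P$ is the set of gesture proposals elicited for referent $r$ and the $P_i$ are its maximal subsets of identical proposals.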
CueVR: Studying the Usability of Cue-based Authentication for Virtual Reality
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531092
Yomna Abdelrahman, Florian Mathis, Pascal Knierim, Axel Kettler, Florian Alt, M. Khamis
{"title":"CueVR: Studying the Usability of Cue-based Authentication for Virtual Reality","authors":"Yomna Abdelrahman, Florian Mathis, Pascal Knierim, Axel Kettler, Florian Alt, M. Khamis","doi":"10.1145/3531073.3531092","DOIUrl":"https://doi.org/10.1145/3531073.3531092","url":null,"abstract":"Existing virtual reality (VR) authentication schemes are either slow or prone to observation attacks. We propose CueVR, a cue-based authentication scheme that is resilient against observation attacks by design since vital cues are randomly generated and only visible to the user experiencing the VR environment. We investigate three different input modalities through an in-depth usability study (N=20) and show that while authentication using CueVR is slower than the less secure baseline, it is faster than existing observation resilient cue-based schemes and VR schemes (4.151 s – 7.025 s to enter a 4-digit PIN). Our results also indicate that using the controllers’ trackpad significantly outperforms input using mid-air gestures. We conclude by discussing how visual cues can enhance the security of VR authentication while maintaining high usability. Furthermore, we show how existing real-world authentication schemes combined with VR’s unique characteristics can advance future VR authentication procedures.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"56 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132025380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
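The abstract does not detail how CueVR generates its cues; the sketch below only illustrates the general principle it names (randomly generated cues visible solely inside the headset). Reshuffling the digit-to-position mapping before every input step means an external observer who watches the user's motions learns nothing about the PIN. All identifiers are illustrative assumptions, not CueVR's API:

```typescript
// Illustrative sketch, not CueVR's implementation: a keypad whose
// digit-to-position mapping is reshuffled before every input step and
// rendered only inside the headset, so an external observer watching
// the user's motions cannot infer the PIN.

type KeypadLayout = number[]; // index = spatial position, value = digit shown there

function shuffledLayout(): KeypadLayout {
  const digits = Array.from({ length: 10 }, (_, i) => i);
  // Fisher-Yates shuffle; a real scheme would use a CSPRNG.
  for (let i = digits.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [digits[i], digits[j]] = [digits[j], digits[i]];
  }
  return digits;
}

// `selectPosition` stands in for the user's choice via an input modality
// such as the controllers' trackpad or mid-air gestures.
function enterPin(
  pin: number[],
  selectPosition: (layout: KeypadLayout) => number
): boolean {
  return pin.every((digit) => {
    const layout = shuffledLayout();        // fresh cues for each digit
    const chosen = selectPosition(layout);  // position picked in VR
    return layout[chosen] === digit;        // digit hidden behind that position
  });
}
```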
IXCI: The Immersive eXperimenter Control Interface
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534489
Alejandro Rey Lopez, Andrea Bellucci, P. Díaz, I. Cuevas
{"title":"IXCI:The Immersive eXperimenter Control Interface","authors":"Alejandro Rey Lopez, Andrea Bellucci, P. Díaz, I. Cuevas","doi":"10.1145/3531073.3534489","DOIUrl":"https://doi.org/10.1145/3531073.3534489","url":null,"abstract":"Standalone Head-Mounted Displays open up new possibilities to study full-room, bodily interactions in immersive environments. User studies with these devices, however, are complex to design, setup and run, since remote access is required to track the user view and actions, detect errors and perform adjustments during experimental sessions. We designed IXCI as a research support tool to streamline immersive user studies by explicitly addressing these issues. Through a use case, we show how the tool effectively supports the researcher in tasks such as remote debugging, rapid prototyping, run-time parameter configuration, monitoring participants’ progress, providing help to users, recovering from errors and minimizing data loss.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126303861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
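The abstract names run-time parameter configuration and remote help among IXCI's capabilities but does not specify a protocol; one plausible shape, sketched here purely as an assumption, is a small set of typed messages sent from the experimenter's console to the headset application:

```typescript
// Hypothetical message schema for remotely steering an in-headset
// experiment at run time (the abstract names the capability, not the wire format).

type ExperimenterMessage =
  | { kind: "setParameter"; name: string; value: number | string | boolean }
  | { kind: "advanceTrial" }                 // move the participant to the next trial
  | { kind: "showHint"; text: string };      // push help text into the HMD view

function handleMessage(
  msg: ExperimenterMessage,
  params: Map<string, number | string | boolean>
): void {
  switch (msg.kind) {
    case "setParameter":
      params.set(msg.name, msg.value); // applied without restarting the session
      break;
    case "advanceTrial":
      // ...trigger a transition in the experiment's state machine
      break;
    case "showHint":
      // ...render msg.text on an in-world panel for the participant
      break;
  }
}
```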
Tangible VR Lab: Studying User Interaction in Space and Time Morphing Scenarios
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531182
Ana Rebelo, Rui Nóbrega, F. Birra
{"title":"Tangible VR Lab: Studying User Interactionin Space and Time Morphing Scenarios","authors":"Ana Rebelo, Rui Nóbrega, F. Birra","doi":"10.1145/3531073.3531182","DOIUrl":"https://doi.org/10.1145/3531073.3531182","url":null,"abstract":"Virtual Reality (VR) offers an interesting interface for exploring changes in space and time that otherwise could be difficult to simulate in the real world. It becomes possible to distort the virtual world by increasing or diminishing distances, as well as to play with time delays. In this way, it is possible to create different spatiotemporal conditions to study different interaction techniques to analyse which ones are more suitable for each task. Related work has revealed an easy adaptation of human beings to interaction methods dissimilar from everyday conditions. In particular, hyperbolic spaces have shown unique properties for intuitive navigation over an area seemingly larger and less restricted than Euclidean spaces. Research on delay tolerance also suggests humans’ inability to detect slight discrepancies between visual and proprioceptive sensory information during the interaction. This work aims to create a tangible Virtual Environment (VE) to explore users’ adaptability in spatiotemporal distortion scenarios. As a case study, we restricted the scope of the investigation to two morphing scenarios. The Space Morphing Scenario compares the adaptability of users to hyperbolic versus Euclidean spaces. The Time Morphing Scenario intends to ascertain from which visual delay values the task performance is affected. The results showed significant differences between Euclidean space and hyperbolic space. Regarding the visual feedback, although participants find the task more difficult with delay values starting at 500 ms, the results show a decrease in performance as early as the 200 ms delay.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126555506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
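The 200 ms and 500 ms delay values come from the abstract; the buffering mechanism below is an assumption about how such a visual delay could be injected, shown only to make the Time Morphing manipulation concrete:

```typescript
// Sketch of injecting a fixed visual delay between tracked input and
// rendered feedback. The renderer draws the delayed pose instead of the
// live one, desynchronizing vision from proprioception by a controlled amount.

interface TimedPose {
  timestampMs: number;
  position: [number, number, number];
}

class DelayedFeedback {
  private buffer: TimedPose[] = [];
  constructor(private delayMs: number) {} // e.g. 200 or 500

  push(pose: TimedPose): void {
    this.buffer.push(pose);
  }

  // Returns the newest pose that is at least `delayMs` old, or undefined
  // if the buffer has not yet accumulated enough history.
  poseAt(nowMs: number): TimedPose | undefined {
    const cutoff = nowMs - this.delayMs;
    while (this.buffer.length > 1 && this.buffer[1].timestampMs <= cutoff) {
      this.buffer.shift(); // discard poses older than the one we will render
    }
    const head = this.buffer[0];
    return head !== undefined && head.timestampMs <= cutoff ? head : undefined;
  }
}
```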
A Multi-Modal Approach to Creating Routines for Smart Speakers
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531168
B. R. Barricelli, D. Fogli, Letizia Iemmolo, A. Locoro
{"title":"A Multi-Modal Approach to Creating Routines for Smart Speakers","authors":"B. R. Barricelli, D. Fogli, Letizia Iemmolo, A. Locoro","doi":"10.1145/3531073.3531168","DOIUrl":"https://doi.org/10.1145/3531073.3531168","url":null,"abstract":"Smart speakers can execute user-defined routines, namely, sequences of actions triggered by specific events or conditions. This paper presents a new approach to the creation of routines, which leverages the multi-modal features (vision, speech, and touch) offered by Amazon Alexa running on Echo Show devices. It then illustrates how end users found easier to create routines with the proposed approach than with the usual interaction with the Alexa app.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"83 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120919686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
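As a concrete reading of "sequences of actions triggered by specific events or conditions", a routine can be modeled as a trigger paired with an ordered action list. The sketch below is illustrative only and is not Alexa's actual routine schema:

```typescript
// Minimal trigger-action routine model; all type and field names are assumptions.

type Trigger =
  | { kind: "utterance"; phrase: string }                      // spoken phrase
  | { kind: "schedule"; cron: string }                         // time-based condition
  | { kind: "deviceEvent"; deviceId: string; event: string };  // sensor or device event

type Action =
  | { kind: "speak"; text: string }
  | { kind: "setDevice"; deviceId: string; state: Record<string, unknown> }
  | { kind: "showOnScreen"; content: string };                 // uses the Echo Show display

interface Routine {
  name: string;
  trigger: Trigger;
  actions: Action[]; // executed in order when the trigger fires
}

const goodMorning: Routine = {
  name: "Good morning",
  trigger: { kind: "utterance", phrase: "good morning" },
  actions: [
    { kind: "speak", text: "Good morning!" },
    { kind: "setDevice", deviceId: "kitchen-light", state: { on: true } },
    { kind: "showOnScreen", content: "Today's weather and agenda" },
  ],
};
```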
A Phygital Toolkit for Rapidly Designing Smart Things at School
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531119
R. Gennari, M. Matera, Diego Morra, Mehdi Rizvi
{"title":"A Phygital Toolkit for Rapidly Designing Smart Things at School","authors":"R. Gennari, M. Matera, Diego Morra, Mehdi Rizvi","doi":"10.1145/3531073.3531119","DOIUrl":"https://doi.org/10.1145/3531073.3531119","url":null,"abstract":"Designing smart things, so as to ideate and program them, is a complex process and an empowerment opportunity. Toolkits can help non-experts engage in their design, that is, in their ideation and programming. This paper presents the IoTgo toolkit, made of card-based material and digital components. Playing with IoTgo can help understand the design context, ideate smart things for it, make them communicate and program them with patterns. In recent years, IoTgo was used and evolved at a distance with few participants. This paper presents a study with it, for the first time in presence, in a high-school. Results are discussed and suggest what role IoTgo or similar toolkits can play in the rapid design of smart things with non-seasoned designers or programmers.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128130952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Visuo-Haptic Interaction
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3535260
Katrin Wolf, Marco Kurzweg, Yannick Weiss, S. Brewster, Albrecht Schmidt
{"title":"Visuo-Haptic Interaction","authors":"Katrin Wolf, Marco Kurzweg, Yannick Weiss, S. Brewster, Albrecht Schmidt","doi":"10.1145/3531073.3535260","DOIUrl":"https://doi.org/10.1145/3531073.3535260","url":null,"abstract":"While traditional interfaces in human-computer interaction mainly rely on vision and audio, haptics becomes more and more important. Haptics cannot only increase the user experience and make technology more immersive, it can also transmit information that is hard to interpret only through vision and audio, such as the softness of a surface or other material properties. In this workshop, we aim at discussing how we could interact with technology if haptics is strongly supported and which novel research areas could emerge.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132727406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prototyping InContext: Exploring New Paradigms in User Experience Tools
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531175
A. R. L. Carter, M. Sturdee, Alan J. Dix
{"title":"Prototyping InContext: Exploring New Paradigms in User Experience Tools","authors":"A. R. L. Carter, M. Sturdee, Alan J. Dix","doi":"10.1145/3531073.3531175","DOIUrl":"https://doi.org/10.1145/3531073.3531175","url":null,"abstract":"The technologies we use in everyday contexts are designed and tested, using existing standards of usability. As technology advances standards are still based on planar displays and simple screen-based interactions. End-user digital devices need to consider context and physicality as additional influences on design. Additionally, accessibility and multi-modal interaction must be considered as we build technologies with interactions such as soundscapes to support user experience. When considering the tools we use to design existing interactions, we can evaluate new ways of working with software to support the development of the changing face of interactive devices. This paper presents two prototypes which explore the space of user experience design tools, first in the space of contextual cues when looking at multi device interaction, and second, in the space of physical prototyping. These prototypes are starting points for a wider discussion around the changing face of usability. We also discuss extending the scope of existing user experience design tools and rethinking what ”user experience” means when the devices we own are becoming ’aware’ of their surroundings, context, and have increasing agency.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133185482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DeBORAh: A Web-Based Cross-Device Orchestration Layer
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534483
L. Vandenabeele, Hoorieh Afkari, J. Hermen, Louis Deladiennée, Christian Moll, V. Maquil
{"title":"DeBORAh: A Web-Based Cross-Device Orchestration Layer","authors":"L. Vandenabeele, Hoorieh Afkari, J. Hermen, Louis Deladiennée, Christian Moll, V. Maquil","doi":"10.1145/3531073.3534483","DOIUrl":"https://doi.org/10.1145/3531073.3534483","url":null,"abstract":"In this paper, we present DeBORAh, a front-end, web-based software layer supporting orchestration of interactive spaces combining multiple devices and screens. DeBORAh uses a flexible approach to decompose multimedia content into containers and webpages, so that they can be displayed in various ways across devices. We describe the concept and implementation of DeBORAh and explain how it has been used to instantiate two different cross-device scenarios.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115143120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
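The abstract describes decomposing multimedia content into containers and webpages displayed across devices, but gives no API; the following sketch shows one way such an orchestration scenario could be declared, with all names being assumptions rather than DeBORAh's actual interface:

```typescript
// Hypothetical declaration of a cross-device scenario: content is split
// into containers (each backed by a webpage), and a layout maps
// containers onto the devices that render them.

interface Container {
  id: string;
  url: string; // webpage holding one piece of content
}

interface DeviceRole {
  device: string;       // e.g. "wall-display", "tablet-1"
  containers: string[]; // container ids this device renders
}

interface Scenario {
  containers: Container[];
  layout: DeviceRole[];
}

const exhibitScenario: Scenario = {
  containers: [
    { id: "video", url: "/content/intro-video.html" },
    { id: "quiz", url: "/content/quiz.html" },
  ],
  layout: [
    { device: "wall-display", containers: ["video"] },
    { device: "tablet-1", containers: ["quiz"] },
  ],
};
```

Because the layout is just data, reassigning a container to another device amounts to editing the scenario declaration, which matches the flexibility the abstract emphasizes.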