Proceedings of the 2022 International Conference on Advanced Visual Interfaces: Latest Publications

Exploring Manipulating In-VR Audio To Facilitate Verbal Interactions Between VR Users And Bystanders
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531079
Joseph O'Hagan, J. Williamson, M. Khamis, Mark Mcgill
Abstract: Despite recent work investigating how VR users can be made aware of bystanders, few studies have explored how bystander-VR user interactions may be facilitated by, for example, increasing the user's auditory awareness so they can better converse with bystanders. Through a lab study (N=15), we investigated four approaches to manipulating in-VR audio to facilitate verbal interactions between a VR user and a bystander: (1) dynamically reducing application volume, (2) removing background audio, (3) removing sound effects, and (4) removing all audio. Our results show that audio manipulations can significantly improve a VR user's auditory awareness at the cost of a reduced sense of presence in VR. Most participants preferred increased awareness to be balanced against decreased presence; however, a subset of participants prioritised increasing awareness regardless of the cost to presence.
Citations: 10
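A minimal sketch of the first manipulation, dynamically reducing application volume, is given below. It assumes a browser Web Audio API setup rather than the VR engine used in the study; the bus structure, the duck level, and the onBystanderSpeaking hook are illustrative placeholders.

```typescript
// Sketch of condition (1), "dynamically reducing application volume":
// route application audio through gain nodes and duck them while a bystander speaks.
// Illustration using the Web Audio API, not the study's implementation.

const ctx = new AudioContext();

// Separate busses so background music and sound effects can be attenuated
// independently (mirroring conditions 1-4 in the study). Application sources
// (e.g. MediaElementAudioSourceNode instances) would be connected to these busses.
const backgroundBus = ctx.createGain();
const effectsBus = ctx.createGain();
backgroundBus.connect(ctx.destination);
effectsBus.connect(ctx.destination);

// Smoothly lower (or restore) a bus over `fadeSeconds` to avoid clicks.
function setBusLevel(bus: GainNode, level: number, fadeSeconds = 0.3): void {
  const now = ctx.currentTime;
  bus.gain.cancelScheduledValues(now);
  bus.gain.setValueAtTime(bus.gain.value, now);
  bus.gain.linearRampToValueAtTime(level, now + fadeSeconds);
}

// Hypothetical hook: called when a bystander starts or stops talking.
function onBystanderSpeaking(speaking: boolean): void {
  // Condition (1): duck everything to 20%. Condition (2) would instead set
  // only the background bus to 0 and leave sound effects untouched.
  const target = speaking ? 0.2 : 1.0;
  setBusLevel(backgroundBus, target);
  setBusLevel(effectsBus, target);
}
```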
Extended UTAUT model to analyze the acceptance of virtual assistant's recommendations using interactive visualisations
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531129
S. Valtolina, Ricardo Anibal Matamoros Aragon, Elia Musiu, F. Epifania, Mattia Villa
Abstract: The use of learning objects (LOs) to create digital courses has been widely advocated by learning strategists and by teachers engaged in the e-learning domain. The ability to combine chunks of learning material so as to meet complex educational requirements is still a challenge. This paper explores the idea of a learning assistant that advises teachers on which e-learning modules to consider for their courses. An AI-based digital assistant can provide significant opportunities, but might be perceived as a threat. The paper presents how teachers could perceive a virtual assistant as more trustworthy when it applies interactive visual strategies. To analyze teachers' acceptance of the digital assistant, our proposal extends the Unified Theory of Acceptance and Use of Technology (UTAUT) model to incorporate three new constructs: communicability, perceived trust, and experience. To this end, 14 teachers were involved in user tests.
Citations: 1
ReFlex Framework: Rapid Prototyping for Elastic Displays
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534482
Mathias Müller, D. Kammer, Lasse Grimm, Konrad Fabian, Diana Simon
Abstract: Creating innovative applications for novel interface technologies poses several challenges for both designers and developers. Rapid prototyping helps to examine concepts, check hypotheses, and acquire early user feedback. In this paper, we present ReFlex, a framework that provides rapid prototyping and development tools for Elastic Displays, and its application in the context of a student project developing a spatial controller for remote 3D navigation.
Citations: 1
Advanced Visual Interfaces for Augmented Video
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3535253
M. Coccoli, Ilenia Galluccio, Ilaria Torre, F. Amenduni, Alberto A. P. Cattaneo, Christopher Clarke
Abstract: The growing use of online videos across a wide range of applications, including education and training, demands new approaches to enhance their utility and the user experience, and to minimise or overcome their limitations. In addition, these new approaches must consider the needs of users with different requirements, abilities, and usage contexts. Advances in human-computer interaction, immersive video, artificial intelligence, and adaptive systems can be effectively exploited to this aim, opening up exciting opportunities for enhancing the video medium. The purpose of this workshop is to bring together experts in the fields above and from popular application domains in order to provide a forum for discussing the current state of the art and requirements for specific application domains, in addition to proposing experimental and theoretical approaches.
Citations: 0
Soma Design – Intertwining Aesthetics, Ethics and Movement
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3538400
K. Höök
Abstract: I will discuss soma design, a process that allows designers to examine and improve on connections between sensation, feeling, emotion, subjective understanding and values. Soma design builds on pragmatics and in particular on somaesthetics by Shusterman. It combines soma, as in our first-person sensual experience of the world, with aesthetics, as in deepening our knowledge of our sensory experiences to live a better life. In my talk, I will discuss how aesthetics and ethics are enacted in a soma design process. Our cultural practices and digitally-enabled objects enforce a form of sedimented, agreed-upon movements, enabling variation, but with certain prescribed ways to act, feel and think. This leaves designers with a great responsibility, as these become the movements that we invite our end-users to engage with, in turn shaping them, their movements, their bodies, their feelings and thoughts. I will argue that by engaging in a soma design process we can better probe which movements lead to deepened somatic awareness; social awareness of others in the environment and how they are affected by the human-technology assemblage; enactments of bodily freedoms rather than limitations; making norms explicit; engagement with a pluralist feminist position on who we are designing for; and aesthetic experience and expression.
Citations: 0
Adaptive DoF: Concepts to Visualize AI-generated Movements in Human-Robot Collaboration
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534479
Max Pascher, Kirill Kronhardt, Til Franzen, J. Gerken
Abstract: Nowadays, robots collaborate closely with humans in a growing number of areas. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior. This understanding, however, is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intent and comprehending how they "think" about their actions. We work on solutions that communicate the cobot's AI-generated motion intent to a human collaborator. Effective communication enables users to proceed with the most suitable option. We present a design exploration with different visualization techniques to optimize this user understanding, ideally resulting in increased safety and end-user acceptance.
Citations: 0
Viewing Browser Content from Extreme Angles with Partial Perspective Corrections
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534474
F. Sandnes
Abstract: Displays viewed from extreme angles become perceptually distorted due to perspective projection. Perspective correction has been discussed as a remedy, yet there are few general and practical implementations available to users. This work explores modern browser technology for realizing perspective correction in practice, and whether partial perspective correction is feasible. Three prototypes are explored: two where the corrections are configured manually, and one where the corrections are determined automatically through camera-based vantage point measurements. Browser transformations are used to perform perspective corrections to content in real time.
Citations: 0
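Because the prototypes rely on browser transformations, the general mechanism can be sketched with a CSS 3D transform. This is a minimal illustration, not the paper's code; the function name, the viewing-angle input, and the assumed viewer distance are placeholders.

```typescript
// Sketch: counter-rotate page content so it appears (approximately) fronto-parallel
// to a viewer positioned at `viewingAngleDeg` off the screen normal.
// Illustrative only; the prototypes in the paper may use different transforms.

function applyPerspectiveCorrection(
  target: HTMLElement,
  viewingAngleDeg: number,   // horizontal angle between viewer and screen normal
  viewingDistancePx = 1000   // assumed eye-to-screen distance, in CSS pixels
): void {
  // perspective() sets the assumed viewer distance; rotating the content towards
  // the viewer partially compensates the oblique projection.
  target.style.transformOrigin = "center center";
  target.style.transform =
    `perspective(${viewingDistancePx}px) rotateY(${viewingAngleDeg}deg)`;
}

// Example: a viewer sitting roughly 50 degrees to the right of the display.
applyPerspectiveCorrection(document.body, 50);

// A partial correction, as explored in the paper, could simply scale the angle,
// e.g. applyPerspectiveCorrection(document.body, 0.5 * 50).
```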
Mixed Reality Communication System for Procedural Tasks
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534497
M. Rebol, C. Ranniger, C. Hood, E. Horan, A. Rutenberg, N. Sikka, Y. Ajabnoor, Safinaz Alshikah, Krzysztof Pietroszek
Abstract: We design a volumetric communication system for remote assistance of procedural medical tasks. The system allows a remote expert to spatially guide a local operator using a real-time volumetric representation of the patient. Guidance is provided by voice, virtual hand metaphor, and annotations performed in situ. We include the feedback we received from the medical professionals and early NASA TLX [5] data on the cognitive load of the system.
Citations: 1
Corpus Summarization and Exploration using Multi-Mosaics
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534468
Shane Sheehan, S. Luz, M. Masoodian
Abstract: In fields such as translation studies and computational linguistics, various tools are used to analyze the content of text corpora and to extract keywords and other entities for analysis. Concordancing – arranging passages of a text corpus in alphabetical order of user-defined keywords – is one of the most widely used forms of text analysis. This paper describes Multi-Mosaics, a tool for text analysis using multiple implicitly linked Concordance Mosaic visualisations. Multi-Mosaics supports examining linguistic relationships within the context windows surrounding multiple extracted keywords.
Citations: 0
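Concordancing itself can be illustrated in a few lines: collect every occurrence of a user-defined keyword together with a window of surrounding tokens, then sort the resulting lines. The sketch below is a generic keyword-in-context (KWIC) example, not the Multi-Mosaics implementation; the sort key and window size are arbitrary choices.

```typescript
// Minimal keyword-in-context (KWIC) concordance: for each occurrence of the
// keyword, capture a fixed-size window of surrounding tokens, then sort the
// lines alphabetically by the context following the keyword.
interface ConcordanceLine {
  left: string;    // tokens preceding the keyword
  keyword: string;
  right: string;   // tokens following the keyword
}

function concordance(text: string, keyword: string, window = 5): ConcordanceLine[] {
  const tokens = text.toLowerCase().split(/\s+/).filter(t => t.length > 0);
  const lines: ConcordanceLine[] = [];
  tokens.forEach((token, i) => {
    if (token === keyword.toLowerCase()) {
      lines.push({
        left: tokens.slice(Math.max(0, i - window), i).join(" "),
        keyword: token,
        right: tokens.slice(i + 1, i + 1 + window).join(" "),
      });
    }
  });
  // Sort by the right-hand context, a common concordance ordering.
  return lines.sort((a, b) => a.right.localeCompare(b.right));
}

// Example usage on a toy corpus.
const corpus = "the cat sat on the mat while the dog watched the cat sleep";
console.log(concordance(corpus, "cat", 3));
```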
Composites: A Tangible Interaction Paradigm for Visual Data Analysis in Design Practice
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531091
Hariharan Subramonyam, Eytan Adar, S. Drucker
Abstract: Conventional tools for visual analytics emphasize a linear production workflow and lack organic "work surfaces." A better surface would simultaneously support collaborative visualization construction, data and design exploration, and reasoning. To facilitate data-driven design within existing design tools such as card sorting, we introduce Composites, a tangible, augmented reality interface for constructing visualizations on large surfaces. In response to the placement of physical sticky notes, Composites projects visualizations and data onto large surfaces. Our spatial grammar allows the designer to flexibly construct visualizations through the use of the notes. Similar to affinity diagramming, the designer can "connect" the physical notes to data, operations, and visualizations, which can then be re-arranged based on creative needs. We develop mechanisms (sticky interactions, visual hinting, etc.) to provide guiding feedback to the end user. By leveraging low-cost technology, Composites extends a working surface to support a broad range of workflows without limiting creative design thinking.
Citations: 1
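As a purely illustrative example of the general pattern, and not the grammar described in the paper, a spatial rule might map the relative placement of recognised notes to a visualization specification, as in the toy sketch below; every type, field, and threshold here is hypothetical.

```typescript
// Toy spatial rule: when a "data" note is placed directly above a "chart type"
// note, combine them into a visualization spec. This is not Composites' grammar,
// only an illustration of mapping note placement to a spec.
type NoteRole = "data" | "chart";

interface Note {
  id: string;
  role: NoteRole;
  label: string;   // e.g. a column name, or a mark type such as "bar"
  x: number;       // detected position on the surface, in cm
  y: number;
}

interface VisSpec {
  field: string;
  mark: string;
}

function composeSpecs(notes: Note[], maxGapCm = 8): VisSpec[] {
  const specs: VisSpec[] = [];
  for (const data of notes.filter(n => n.role === "data")) {
    for (const chart of notes.filter(n => n.role === "chart")) {
      const dx = Math.abs(data.x - chart.x);
      const dy = chart.y - data.y; // positive when the chart note sits below the data note
      if (dx < maxGapCm && dy > 0 && dy < maxGapCm) {
        specs.push({ field: data.label, mark: chart.label });
      }
    }
  }
  return specs;
}

// Example: a "sales" data note placed above a "bar" chart note.
console.log(composeSpecs([
  { id: "n1", role: "data", label: "sales", x: 10, y: 10 },
  { id: "n2", role: "chart", label: "bar", x: 11, y: 15 },
]));
```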