Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology: Latest Publications

interiqr: Unobtrusive Edible Tags using Food 3D Printing
Yamato Miyatake, Parinya Punpongsanon, D. Iwai, Kosuke Sato
DOI: https://doi.org/10.1145/3526113.3545669 | Published: 2022-10-28
Abstract: We present interiqr, a method that utilizes the infill parameter in the 3D printing process to embed information inside food that is difficult to recognize with the human eye. Our key idea is to utilize air space or secondary materials to generate a specific pattern inside the food without changing the model geometry. As a result, our method exploits patterns that appear as hidden edible tags to store data and simultaneously adds them to the 3D printing pipeline. Our contributions also include a framework that connects the user with a data-embedding interface through the food 3D printing process, and a decoding system that allows the user to read the information inside the 3D-printed food through backlight illumination and a simple image-processing technique. Finally, we evaluate the usability of our method under different settings and demonstrate it through example application scenarios.
Citations: 7
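The abstract describes decoding via backlight illumination and simple image processing. Below is a minimal sketch of what such a decoding step might look like, assuming the embedded pattern is a QR-style code photographed under backlight; the OpenCV pipeline and threshold parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical decoding sketch: reveal and read a hidden tag from a backlit photo.
# Assumes OpenCV (pip install opencv-python) and a QR-style embedded pattern.
import cv2

def decode_edible_tag(image_path: str) -> str:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Backlighting makes the infill pattern darker than the surrounding food;
    # adaptive thresholding separates the two despite uneven illumination.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 51, 5)
    # Try to read the revealed pattern as a QR code.
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(binary)
    return data  # empty string if nothing was decoded

print(decode_edible_tag("backlit_cookie.jpg"))
```
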
Muscle Synergies Learning with Electrical Muscle Stimulation for Playing the Piano
Arinobu Niijima, Toki Takeda, Ryosuke Aoki, Shinji Miyahara
DOI: https://doi.org/10.1145/3526113.3545666 | Published: 2022-10-28
Abstract: When playing scales on the piano, playing all notes evenly is a basic technique for improving the quality of the music. However, this is difficult for beginners because they need to achieve appropriate muscle synergies of the forearm and shoulder muscles, i.e., pressing keys while sliding the hand sideways. In this paper, we propose a system that uses electrical muscle stimulation (EMS) to teach beginners how to improve their muscle synergies while playing scales. We focus on the "thumb-under" method and assist with it by applying EMS to the deltoid muscle. We conducted a user study to investigate whether our EMS-based system can help beginners learn new muscle synergies when playing ascending scales. We divided the participants into two groups: an experimental group that practiced with EMS and a control group that practiced without it. The results showed that practicing with EMS was more effective at improving the evenness of scales than practicing without it, and that the participants' muscle synergies changed after practice.
Citations: 3
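The paper applies EMS to the deltoid to assist the thumb-under motion while playing ascending scales. The sketch below only illustrates that timing idea; the EMSDriver class, stimulation parameters, and fingering-based trigger are hypothetical placeholders, not the authors' system.

```python
# Illustrative sketch only: timing EMS assistance to the "thumb-under" moment
# of an ascending C-major scale. EMSDriver is a hypothetical stand-in for the
# actual stimulator hardware API; it is not from the paper.
import time

class EMSDriver:
    """Hypothetical stimulator interface (placeholder)."""
    def pulse(self, channel: str, intensity_ma: float, duration_s: float) -> None:
        print(f"EMS {channel}: {intensity_ma} mA for {duration_s:.2f} s")

# Right-hand C-major fingering 1-2-3-1-2-3-4-5: the thumb passes under
# to play the 4th note (index 3, zero-based).
THUMB_UNDER_NOTES = {3}

def play_scale_with_assistance(ems: EMSDriver, tempo_bpm: float = 60) -> None:
    beat = 60.0 / tempo_bpm
    for note_index in range(8):
        if note_index in THUMB_UNDER_NOTES:
            # Stimulate the deltoid just before the thumb-under transition
            # so the arm slides sideways while the next key is pressed.
            ems.pulse("deltoid", intensity_ma=8.0, duration_s=0.15)
        time.sleep(beat)  # one note per beat in this toy example

play_scale_with_assistance(EMSDriver())
```
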
Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses
Jonathan Sutton, T. Langlotz, Alexander Plopski, S. Zollmann, Yuta Itoh, H. Regenbrecht
DOI: https://doi.org/10.1145/3526113.3545633 | Published: 2022-10-28
Abstract: Augmented Reality has traditionally been used to display digital overlays in real environments. Many AR applications, such as remote collaboration, picking tasks, or navigation, require highlighting physical objects for selection or guidance. These highlights use graphical cues such as outlines and arrows. While effective, they contribute greatly to visual clutter, can occlude scene elements, and can be problematic for long-term use. As a substitute for such overlays, we explore saliency modulation to accentuate objects in the real environment and guide the user's gaze. Instead of manipulating video streams, as is done in perception and cognition research, we investigate saliency modulation of the real world using optical see-through head-mounted displays. This is a new challenge, since we do not have full control over the view of the real environment. In this work we present our solution to this challenge, including built prototypes and their evaluation.
Citations: 3
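The paper modulates saliency of the real world on optical see-through glasses, where there is no video stream to edit. As a simplified illustration of the underlying idea only, the sketch below boosts saturation inside a target region of an image and mutes it elsewhere; the gains and target region are made-up parameters.

```python
# Illustrative saliency-modulation sketch on an image (not the paper's
# optical-see-through pipeline): raise saturation inside a target region and
# slightly lower it elsewhere so the target attracts gaze. Gains are assumptions.
import cv2
import numpy as np

def modulate_saliency(frame_bgr, target_mask, boost=1.4, suppress=0.8):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    gain = np.where(target_mask > 0, boost, suppress).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)  # scale the saturation channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

frame = cv2.imread("scene.jpg")
mask = np.zeros(frame.shape[:2], np.uint8)
cv2.circle(mask, (320, 240), 60, 255, -1)  # target region to accentuate
cv2.imwrite("scene_modulated.jpg", modulate_saliency(frame, mask))
```
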
Phrase-Gesture Typing on Smartphones
Zheer Xu, Yankang Meng, Xiaojun Bi, Xing-Dong Yang
DOI: https://doi.org/10.1145/3526113.3545683 | Published: 2022-10-28
Abstract: We study phrase-gesture typing, a gesture typing method that allows users to type short phrases by swiping through all the letters of the words in a phrase with a single, continuous gesture. Unlike word-gesture typing, where text is entered word by word, phrase-gesture typing enters text phrase by phrase. To demonstrate the usability of phrase-gesture typing, we implemented a prototype called PhraseSwipe. Our system is composed of a frontend interface designed specifically for typing phrases and a backend phrase-level gesture decoder built on a transformer-based neural language model. The decoder was trained on five million phrases of up to five words in length, chosen randomly from the Yelp Review Dataset. Through a user study with 12 participants, we show that participants could type with PhraseSwipe at an average speed of 34.5 WPM with a word error rate of 1.1%.
Citations: 0
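PhraseSwipe's decoder is a transformer-based neural language model, which is not reproduced here. As a toy stand-in that conveys what phrase-level decoding means, the sketch below scores candidate phrases by how closely the ideal key path through all of their letters matches the observed swipe; the keyboard coordinates and candidate list are assumptions.

```python
# Toy phrase-level gesture decoder sketch (not PhraseSwipe's transformer decoder):
# score each candidate phrase by the distance between the observed swipe path and
# the ideal path through that phrase's letters on an approximate QWERTY layout.
import numpy as np

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
# Rough key centers on a unit grid (row, column with a per-row offset); illustrative only.
KEY_POS = {ch: (r, c + 0.5 * r) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def ideal_path(phrase: str, n: int = 64) -> np.ndarray:
    pts = np.array([KEY_POS[ch] for ch in phrase.replace(" ", "")], float)
    # Resample the polyline through the letters to n evenly spaced points.
    t = np.linspace(0, len(pts) - 1, n)
    return np.stack([np.interp(t, np.arange(len(pts)), pts[:, i]) for i in range(2)], axis=1)

def score(swipe: np.ndarray, phrase: str) -> float:
    # Lower is better: mean point-wise distance between resampled paths.
    return float(np.mean(np.linalg.norm(ideal_path(phrase) - swipe, axis=1)))

candidates = ["good food", "good mood", "great food"]
swipe = ideal_path("good food") + np.random.normal(0, 0.1, (64, 2))  # simulated noisy swipe
print(min(candidates, key=lambda p: score(swipe, p)))
```
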
Color-to-Depth Mappings as Depth Cues in Virtual Reality
Zhipeng Li, Yikai Cui, Tianze Zhou, Yu Jiang, Yuntao Wang, Yukang Yan, Michael Nebeling, Yuanchun Shi
DOI: https://doi.org/10.1145/3526113.3545646 | Published: 2022-10-28
Abstract: Despite significant improvements in Virtual Reality (VR) technologies, most VR displays are fixed-focus, and depth perception remains a key issue limiting the user experience and interaction performance. To supplement humans' inherent depth cues (e.g., retinal blur, motion parallax), we investigate users' perceptual mappings of distance to virtual objects' appearance in order to generate visual cues that enhance depth perception. As a first step, we explore color-to-depth mappings for virtual objects so that their appearance differs in saturation and value to reflect their distance. Through a series of controlled experiments, we elicit and analyze users' strategies for mapping a virtual object's hue, saturation, value, and a combination of saturation and value to its depth. Based on the collected data, we implement a computational model that generates color-to-depth mappings fulfilling adjustable requirements on confusion probability, number of depth levels, and a consistent saturation/value changing tendency. We demonstrate the effectiveness of color-to-depth mappings in a 3D sketching task, showing that, compared to single-colored targets and strokes, our mappings made users more confident in their accuracy without extra cognitive load and reduced the perceived depth error by 60.8%. We also implement four VR applications and demonstrate how our color cues can benefit the user experience and interaction performance in VR.
Citations: 3
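The paper fits a computational model to elicited user data; that model is not available here. The sketch below is only a hand-tuned illustration of what a color-to-depth mapping with discrete depth levels and a consistent saturation/value tendency could look like; the hue, ranges, and level count are assumptions.

```python
# Illustrative color-to-depth mapping sketch (not the paper's fitted model):
# keep hue fixed and decrease saturation and value monotonically with distance,
# quantized into a chosen number of depth levels. All parameters are assumptions.
import colorsys

def color_for_depth(depth_m: float, max_depth_m: float = 10.0,
                    levels: int = 5, hue: float = 0.6):
    # Quantize normalized depth into discrete levels (nearer = more vivid).
    d = min(max(depth_m / max_depth_m, 0.0), 1.0)
    level = min(int(d * levels), levels - 1)
    t = level / (levels - 1)      # 0 = nearest level, 1 = farthest level
    saturation = 1.0 - 0.7 * t    # fade toward gray with distance
    value = 1.0 - 0.5 * t         # darken with distance
    return colorsys.hsv_to_rgb(hue, saturation, value)

for depth in (0.5, 3.0, 6.0, 9.5):
    r, g, b = color_for_depth(depth)
    print(f"{depth:4.1f} m -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")
```
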
DEEP: 3D Gaze Pointing in Virtual Reality Leveraging Eyelid Movement
Xin Yi, Leping Qiu, Wenjing Tang, Yehan Fan, Hewu Li, Yuanchun Shi
DOI: https://doi.org/10.1145/3526113.3545673 | Published: 2022-10-28
Abstract: Gaze-based target selection suffers from low input precision and target occlusion. In this paper, we explore leveraging continuous eyelid movement to support highly efficient, occlusion-robust, dwell-based gaze pointing in virtual reality. We first conducted two user studies to examine users' eyelid movement patterns in both unintentional and intentional conditions. The results proved the feasibility of leveraging intentional eyelid movements, which are distinguishable from natural movements, for input. We also tested the participants' dwelling patterns for targets of different sizes and locations. Based on these results, we propose DEEP, a novel technique that enables users to see through occlusions by controlling the aperture angle of their eyelids and to dwell to select targets with the help of a probabilistic input prediction model. Evaluation results showed that DEEP, incorporating dynamic depth and location selection, significantly outperformed its static variants as well as a naive dwelling baseline. Even for 100% occluded targets, it achieved an average selection speed of 2.5 s with an error rate of 2.3%.
Citations: 2
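DEEP combines eyelid-aperture control with a probabilistic input prediction model. The sketch below illustrates only the interaction mapping, i.e., aperture to a "see-through" layer plus a dwell timer; the aperture thresholds, layer count, and eye-tracker feed are hypothetical, and the probabilistic model is omitted.

```python
# Illustrative sketch of the interaction idea only (not DEEP's probabilistic model):
# map eyelid aperture to a "see-through" depth layer and select after a dwell.
# Aperture thresholds and the aperture stream are hypothetical placeholders.
FULLY_OPEN_DEG = 30.0   # assumed aperture when eyes are naturally open
SQUINT_DEG = 10.0       # assumed aperture at a deliberate squint
NUM_LAYERS = 4          # occluding layers that can be "peeled" away
DWELL_FRAMES = 90       # roughly 1 s at 90 Hz

def layer_for_aperture(aperture_deg: float) -> int:
    """Smaller aperture (more squint) reveals deeper layers."""
    t = (FULLY_OPEN_DEG - aperture_deg) / (FULLY_OPEN_DEG - SQUINT_DEG)
    t = min(max(t, 0.0), 1.0)
    return min(int(t * NUM_LAYERS), NUM_LAYERS - 1)

def run(aperture_stream):
    dwell, last_layer = 0, None
    for aperture in aperture_stream:          # one aperture sample per frame
        layer = layer_for_aperture(aperture)
        dwell = dwell + 1 if layer == last_layer else 1
        last_layer = layer
        if dwell >= DWELL_FRAMES:
            return f"selected target on layer {layer}"
    return "no selection"

print(run([30.0] * 30 + [12.0] * 120))  # open eyes, then hold a squint to select
```
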
DiscoBand: Multiview Depth-Sensing Smartwatch Strap for Hand, Body and Environment Tracking
Nathan Devrio, Chris Harrison
DOI: https://doi.org/10.1145/3526113.3545634 | Published: 2022-10-28
Abstract: Real-time tracking of a user's hands, arms, and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality. Rather than relying on fixed, external tracking infrastructure, the most flexible and consumer-friendly approaches are mobile, self-contained, and compatible with popular device form factors (e.g., smartwatches). In this vein, we contribute DiscoBand, a thin sensing strap not exceeding 1 cm in thickness. Sensors operating so close to the skin inherently face issues with occlusion. To help overcome this, our strap uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user's body and surroundings. In addition to evaluating arm- and hand-pose tracking, we describe a series of supplemental applications powered by the band's data, including held-object recognition and environment mapping.
Citations: 1
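The band fuses views from eight inward-facing depth sensors into a sparse 3D point cloud. The sketch below shows the generic multi-sensor fusion step such a design implies, i.e., transforming each sensor's points into a shared wrist frame; the strap geometry and extrinsics are invented for illustration.

```python
# Illustrative multi-sensor fusion sketch: merge point clouds from depth sensors
# arranged around a strap into one wrist-centered cloud using per-sensor extrinsics.
# The sensor count matches the paper's description; the geometry here is made up.
import numpy as np

NUM_SENSORS = 8
STRAP_RADIUS_M = 0.025  # assumed wrist radius

def sensor_extrinsics(i: int):
    """Rotation and translation of sensor i, spaced evenly around the strap."""
    angle = 2 * np.pi * i / NUM_SENSORS
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    translation = rotation @ np.array([STRAP_RADIUS_M, 0.0, 0.0])
    return rotation, translation

def fuse(clouds):
    """Transform each sensor's (N, 3) cloud into the shared wrist frame and stack."""
    fused = []
    for i, cloud in enumerate(clouds):
        rotation, translation = sensor_extrinsics(i)
        fused.append(cloud @ rotation.T + translation)
    return np.vstack(fused)

clouds = [np.random.rand(8, 3) * 0.1 for _ in range(NUM_SENSORS)]  # fake sensor data
print(fuse(clouds).shape)  # (64, 3) sparse wrist-frame point cloud
```
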
Integrating Real-World Distractions into Virtual Reality
Yujie Tao, Pedro Lopes
DOI: https://doi.org/10.1145/3526113.3545682 | Published: 2022-10-28
Abstract: With the proliferation of consumer-level virtual reality (VR) devices, users have started experiencing VR in less controlled environments, such as social gatherings and public areas. While current VR hardware provides an increasingly immersive experience, it ignores stimuli originating from the physical surroundings that distract users from the VR experience. To block distractions from the outside world, many users wear noise-canceling headphones. However, this is insufficient to block loud or transient sounds (e.g., drilling or hammering) and, especially, multi-modal distractions (e.g., air drafts, temperature shifts from an A/C, construction vibrations, or food smells). To tackle this, we explore a new concept in which we directly integrate distracting stimuli from the user's physical surroundings into the virtual reality experience to enhance presence. Using our approach, an otherwise distracting wind gust can be mapped directly to the sway of trees in a VR experience that already contains trees. We demonstrate how to integrate a range of distracting stimuli into the VR experience, including haptics (temperature, vibrations, touch), sounds, and smells. To validate our approach, we conducted three user studies and a technical evaluation. First, to validate our key principle, we conducted a controlled study in which participants were exposed to distractions while playing a VR game. We found that our approach improved users' sense of presence compared to wearing noise-canceling headphones. From these results, we engineered a sensing module that detects a set of simple distracting signals (e.g., sounds, wind, and temperature shifts). We validated our hardware in a technical evaluation and in an out-of-lab study where participants played VR games in an uncontrolled environment. Moreover, to gather the perspective of VR content creators who might one day utilize a system inspired by our findings, we invited game designers to use our approach and collected their feedback and VR designs. Finally, we present design considerations for mapping distracting external stimuli and discuss ethical considerations of integrating real-world stimuli into virtual reality.
Citations: 11
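The core idea is to route a sensed real-world stimulus to a plausible event already present in the VR scene (the paper's example maps a wind gust to tree sway). The sketch below illustrates that mapping in the simplest possible form; the stimulus names, thresholds, and scene events are assumptions, not the authors' sensing module.

```python
# Illustrative sketch of the core mapping idea: route a detected real-world stimulus
# to a plausible event already present in the VR scene (e.g., a wind gust becomes
# tree sway). Stimulus kinds, thresholds, and scene events are hypothetical.
from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str         # e.g., "wind", "vibration", "loud_sound", "temperature_rise"
    intensity: float  # normalized 0..1 from the sensing hardware

# Which in-scene event can plausibly absorb each kind of stimulus in this scene.
SCENE_MAPPINGS = {
    "wind": "sway_trees",
    "vibration": "rumble_passing_cart",
    "loud_sound": "thunderclap",
    "temperature_rise": "flare_campfire",
}

def integrate(stimulus: Stimulus, threshold: float = 0.3):
    """Trigger the mapped scene event, scaled by intensity, if it is noticeable."""
    if stimulus.intensity < threshold:
        return None  # too weak to be distracting; nothing to mask
    event = SCENE_MAPPINGS.get(stimulus.kind)
    return f"{event}(strength={stimulus.intensity:.2f})" if event else None

print(integrate(Stimulus("wind", 0.7)))  # -> sway_trees(strength=0.70)
print(integrate(Stimulus("wind", 0.1)))  # -> None
```
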
RemoteLab: A VR Remote Study Toolkit
Jaewook Lee, Raahul Natarrajan, S. S. Rodríguez, Payod Panda, E. Ofek
DOI: https://doi.org/10.1145/3526113.3545679 | Published: 2022-10-28
Abstract: User studies play a critical role in human-subject research, including human-computer interaction. Virtual reality (VR) researchers tend to conduct user studies in person at their laboratory, where participants experiment with novel equipment to complete tasks in a simulated environment that is often new to many of them. However, due to social distancing requirements in recent years, VR research has been disrupted because participants could not attend in-person laboratory studies. At the same time, affordable head-mounted displays are becoming common, enabling access to VR experiences and interactions outside traditional research settings. Recent research has shown that unsupervised remote user studies can yield reliable results; however, the setup of experiment software designed for remote studies can be technically complex and convoluted. We present RemoteLab, a novel open-source Unity toolkit designed to facilitate the preparation of remote experiments by providing a set of tools that synchronize experiment state across multiple computers, record and collect data from various multimedia sources, and replay the accumulated data for analysis. The toolkit helps VR researchers conduct remote experiments when in-person experiments are not feasible, increase the sampling variety of a target population, and reach participants who would otherwise not be able to attend in person.
Citations: 6
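RemoteLab itself is a Unity (C#) toolkit; to keep the examples in this listing in one language, the sketch below shows in Python the general pattern the abstract describes, i.e., broadcasting timestamped experiment-state updates to peer machines and logging them for replay. The message format, ports, and peer addresses are assumptions, not RemoteLab's API.

```python
# General pattern sketch (not RemoteLab's Unity API): broadcast timestamped
# experiment-state updates to peer machines over UDP and log them for replay.
import json
import socket
import time

PEERS = [("192.168.0.12", 9000), ("192.168.0.13", 9000)]  # hypothetical lab machines
LOG_PATH = "session_log.jsonl"

def broadcast_state(state: dict) -> None:
    """Send one state update to every peer and append it to the replay log."""
    message = {"t": time.time(), "state": state}
    payload = json.dumps(message).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for peer in PEERS:
        sock.sendto(payload, peer)
    sock.close()
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(message) + "\n")

def replay(log_path: str = LOG_PATH):
    """Yield logged state updates in their original order for offline analysis."""
    with open(log_path) as log:
        for line in log:
            yield json.loads(line)

broadcast_state({"trial": 3, "condition": "remote", "headset_pos": [0.1, 1.6, 0.0]})
```
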
Diffscriber: Describing Visual Design Changes to Support Mixed-Ability Collaborative Presentation Authoring
Yi-Hao Peng, Jason Wu, Jeffrey P. Bigham, Amy Pavel
DOI: https://doi.org/10.1145/3526113.3545637 | Published: 2022-10-28
Abstract: Visual slide-based presentations are ubiquitous, yet slide-authoring tools are largely inaccessible to people who are blind or visually impaired (BVI). When authoring presentations, the nine BVI presenters in our formative study usually worked with sighted collaborators to produce visual slides based on the text content they produced. While the BVI presenters valued their collaborators' visual design skill, they often felt they could not fully review and provide feedback on the visual changes that were made. We present Diffscriber, a system that identifies and describes changes to a slide's content, layout, and style during presentation authoring. Using our system, BVI presentation authors can efficiently review changes to their presentation by navigating either a summary of high-level changes or individual slide elements. To learn more about changes of interest, presenters can use a generated change hierarchy to navigate to lower-level change details and element styles. BVI presenters using Diffscriber were able to identify slide design changes and provide feedback more easily than with the slides alone. More broadly, Diffscriber illustrates how advances in detecting and describing visual differences can improve mixed-ability collaboration.
Citations: 8
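A minimal sketch of the kind of slide diffing the abstract describes: compare two versions of a slide's elements and emit plain-language change descriptions. The element schema and the properties compared are assumptions for illustration, not Diffscriber's actual pipeline.

```python
# Illustrative slide-diff sketch (not Diffscriber's pipeline): compare two versions
# of a slide, each a dict of element id -> properties, and emit plain-language
# change descriptions. The element schema is an assumption for this sketch.
def describe_changes(before: dict, after: dict):
    changes = []
    for eid in after.keys() - before.keys():
        changes.append(f"Added {after[eid]['type']} \"{after[eid].get('text', '')}\".")
    for eid in before.keys() - after.keys():
        changes.append(f"Removed {before[eid]['type']} \"{before[eid].get('text', '')}\".")
    for eid in before.keys() & after.keys():
        old, new = before[eid], after[eid]
        if old["position"] != new["position"]:
            changes.append(f"Moved {new['type']} from {old['position']} to {new['position']}.")
        for prop in ("font_size", "color", "text"):
            if old.get(prop) != new.get(prop):
                changes.append(f"Changed {prop} of {new['type']} "
                               f"from {old.get(prop)!r} to {new.get(prop)!r}.")
    return changes

before = {"title1": {"type": "title", "text": "Results", "position": (100, 40), "color": "black"}}
after = {"title1": {"type": "title", "text": "Results", "position": (100, 80), "color": "blue"},
         "img1": {"type": "image", "text": "chart of Q3 sales", "position": (200, 200)}}
print("\n".join(describe_changes(before, after)))
```
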