Proceedings of the ACM on Human-Computer Interaction: Latest Publications

Seeing the Wind: An Interactive Mist Interface for Airflow Input
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626480
Tian Min, Chengshuo Xia, Takumi Yamamoto, Yuta Sugiura
Abstract: Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we unveil an approach that uses mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
Citations: 0
Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626473
Liwei Wu, Qing Liu, Jian Zhao, Edward Lank
Abstract: The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore the current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities for VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes that were not intentionally designed for streaming. As a response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space.
Citations: 0
CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626462
João Marcelo Evangelista Belo, Jon Wissing, Tiare Feuchtner, Kaj Grønbæk
Abstract: Determining the correct orientation of objects can be critical for success in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort.
Citations: 0
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626476
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, Keita Higuchi
Abstract: Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets.
Citations: 0
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626481
Stefan Auer, Christoph Anthes, Harald Reiterer, Hans-Christian Jetter
Abstract: Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of the VRFS and the PFS, control-specific differences between input elements, the irrelevance of rudimentary vibrotactile feedback, slower movements in VR, and a preference for the PFS.
Citations: 0
Embodied Provenance for Immersive Sensemaking
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626471
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, Sarah Goodwin
Abstract: Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users' interaction histories. We propose the concept of embodied provenance, the use of three-dimensional space and embodied interactions to support recalling, reproducing, annotating, and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative study with 17 participants, which shows the system's potential for assisting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learned to guide the design of immersive analytic tools for embodied provenance.
Citations: 0
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626464
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, Dongwook Yoon
Abstract: Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method.
Citations: 0
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626465
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, Jan Gugenheimer
Abstract: 3D printers enable end-users to design and fabricate unique physical artifacts, but the process retains a high entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on.
Citations: 0
Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626469
Hiroki Usuba, Shota Yamanaka, Junichi Sato
Abstract: Prior work recommended adding a 4-mm gap between a target and the edge of the screen, as tapping a target located at the screen edge takes longer than tapping non-edge targets. However, that recommendation may have rested on statistical errors, and the prior work left some situations unexplored. In this study, we re-examine the recommendation through crowdsourced experiments that resolve these issues. If we observe the same results as the prior work in experiments with a diverse participant pool, we can verify that the recommendation is suitable. We found that increasing the gap between the target and the screen edge decreased the movement time, consistent with the prior work. In addition, we newly found that increasing the gap decreased the error rate as well. On the basis of these results, we discuss how the gap and the target should be designed.
Citations: 0
Understanding the Effects of Movement Direction on 2D Touch Pointing Tasks
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2023-10-31. DOI: 10.1145/3626482
Xinyong Zhang
Abstract: HCI researchers have long recognized the significant effects of movement direction on human performance, and this factor has been carefully addressed to benefit user interface design. According to our previous study (2012), the weights of the two target dimensions, width W and height H, in the extended index of difficulty (ID) for 2D pointing tasks are asymmetric and appear to vary periodically based on movement direction (θ), following a cosine function. However, this periodic effect of movement direction is uncertain for direct 2D touch pointing tasks, and a thorough understanding of the effects of movement direction on direct pointing tasks, such as on touch input surfaces, is still lacking. In this paper, we conducted two experiments on a 24-inch touch screen, with tilted and horizontal orientations respectively, to confirm the periodic effect in the context of direct pointing and illustrate its variations across different pointing tasks. At the same time, we propose a quantification formula to measure the real differences in task difficulty caused by the direction factor. To the best of our knowledge, this is the first study to do so. Using this formula, the ID values in different directions can be unified to the same scale and compared, providing a new perspective for understanding and evaluating human performance in different interaction environments.
Citations: 0