Proceedings of the ACM on Human-Computer Interaction: Latest Articles

Evaluation of Code Generation for Simulating Participant Behavior in Experience Sampling Method by Iterative In-Context Learning of a Large Language Model
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3661143. Pages: 1-19
Alireza Khanshan, Pieter van Gorp, P. Markopoulos
Abstract: The Experience Sampling Method (ESM) is commonly used to understand behaviors, thoughts, and feelings in the wild by collecting self-reports. Sustaining sufficient response rates, especially in long-running studies, remains challenging. To avoid low response rates and dropouts, experimenters rely on their experience, methodologies proposed in earlier studies, trial and error, or the scarcely available participant behavior data from previous ESM protocols. This approach often fails to find acceptable study parameters, resulting in redesigning the protocol and repeating the experiment. Research has shown the potential of machine learning to personalize ESM protocols so that ESM prompts are delivered at opportune moments, leading to higher response rates. The corresponding training process is hindered by the scarcity of open data in the ESM domain, causing a cold start, which could be mitigated by simulating participant behavior. Such simulations provide training data and insights for experimenters to update their study design choices. Creating such a simulation requires behavioral science, psychology, and programming expertise. Large language models (LLMs) have emerged as facilitators for information inquiry and programming, albeit random and occasionally unreliable ones. We assess the readiness of LLMs for an ESM use case. We conducted research using GPT-3.5-turbo-16k to tackle an ESM simulation problem, explored several prompt design alternatives for generating ESM simulation programs, evaluated the output code in terms of semantics and syntax, and interviewed ESM practitioners. We found that LLM-enabled ESM simulations have the potential to facilitate data generation but perpetuate trust and reliability challenges.
Citations: 0
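The abstract describes generating ESM simulation programs via iterative in-context learning. The snippet below is a minimal illustrative sketch (not the authors' code) of such a loop: the model's output is syntax-checked, and errors are fed back as additional context. The `llm_complete` wrapper is hypothetical and stands in for whatever chat-completion API is used.

```python
import traceback

def llm_complete(messages):
    """Hypothetical wrapper around a chat-completion API (e.g. a GPT-3.5-class model).
    Returns the model's reply as a string; the real call is deployment-specific."""
    raise NotImplementedError

def generate_esm_simulator(task_description, max_rounds=3):
    """Iterative in-context learning: request code, check it, feed errors back."""
    messages = [
        {"role": "system", "content": "You write Python programs that simulate "
                                      "participant responses in ESM studies."},
        {"role": "user", "content": task_description},
    ]
    for _ in range(max_rounds):
        code = llm_complete(messages)
        try:
            compile(code, "<generated>", "exec")   # syntax-level check only
            return code                            # semantic review still happens manually
        except SyntaxError:
            messages.append({"role": "assistant", "content": code})
            messages.append({"role": "user",
                             "content": "That code fails to parse:\n"
                                        + traceback.format_exc()
                                        + "\nPlease return a corrected version."})
    return None  # give up after max_rounds attempts
```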
Does the Medium Matter? A Comparison of Augmented Reality Media in Instructing Novices to Perform Complex, Skill-Based Manual Tasks
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3660249. Pages: 1-28
H. Dhiman, Carsten Röcker
Abstract: Past research comparing augmented reality (AR) media such as in-situ projection and head-mounted devices (HMDs) has usually considered simple manual activities. It is unknown whether previously reported differences between AR media also apply to complex, skill-driven tasks. In this paper, we explore the feasibility of and challenges in designing AR instructions for expertise-driven, skilled activities. We present findings from a real-world, between-subjects experiment in which novices were instructed to trim and bone sub-primal cuts of pork using two interactive AR prototypes, one utilizing in-situ projection and the other the HoloLens 2. The prototypes and instructions were designed in consultation with experts. We compared novices' task performance and subjective perceptions and gathered experts' feedback. Although both users and experts indicated a subjective preference for in-situ projection, the results indicate that when tasks require knowledge, skill, and expertise, the choice of AR medium itself may not be consequential. Rather, in our experiment, instruction quality influenced comprehension, knowledge retention, and task performance. Hence, from an engineering perspective, emphasis ought to be placed on gathering and structuring expert performance and knowledge to create effective instructions, which can then be delivered using any AR medium suited to the task and work environment.
Citations: 0
Design Goals for End-User Development of Robot-Assisted Physical Training Activities: A Participatory Design Study
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3664632. Pages: 1-31
Jose Pablo De la Rosa Gutierrez, Thiago Rocha Silva, Yvonne Dittrich, Anders Stengaard Sørensen
Abstract: Programming robots presents significant challenges, including high costs, extensive time commitments, and steep learning curves, particularly for individuals without a technical background in engineering. These barriers have been partially mitigated by the emergence of end-user development methodologies, yet existing approaches often fall short in equipping users with the software engineering competencies needed to develop comprehensive robot behaviors or to effectively maintain and re-purpose their creations. In this paper, we introduce a novel end-user development approach designed to empower physical therapists to independently specify robot-assisted physical training exercises, eliminating the need for intervention by robotics experts. Our approach is based on a set of design goals obtained through a participatory design study with experts in the field. It utilizes a textual domain-specific language (DSL) that enables users to define expected robot behaviors through Behaviour-Driven Development (BDD) scenarios. The paper discusses key themes, design objectives, and the evolution of requirements that emerged from an evaluative workshop.
Citations: 0
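The paper's DSL is not reproduced in the abstract; purely as a rough illustration, the sketch below shows how BDD-style Given/When/Then scenario text could be parsed into simple step records. The scenario wording, step names, and exercise parameters are invented for this example.

```python
import re
from dataclasses import dataclass

@dataclass
class Step:
    keyword: str   # Given / When / Then / And
    text: str      # free-text body of the step

# Invented example scenario in the Given/When/Then style the abstract mentions.
SCENARIO = """
Scenario: Guided shoulder abduction
  Given the patient is seated in front of the robot
  When the therapist starts the "shoulder abduction" exercise with 10 repetitions
  Then the robot guides the arm to 90 degrees at low resistance
  And the robot stops immediately if the patient reports pain
"""

STEP_RE = re.compile(r"^\s*(Given|When|Then|And)\s+(.*)$")

def parse_scenario(text: str) -> list[Step]:
    """Turn scenario text into a flat list of steps a robot controller could map to actions."""
    steps = []
    for line in text.splitlines():
        match = STEP_RE.match(line)
        if match:
            steps.append(Step(keyword=match.group(1), text=match.group(2).strip()))
    return steps

if __name__ == "__main__":
    for step in parse_scenario(SCENARIO):
        print(f"{step.keyword:>5}: {step.text}")
```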
Supporting Mixed-Presence Awareness across Wall-Sized Displays Using a Tracking Pipeline based on Depth Cameras
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3664634. Pages: 1-32
Adrien Coppens, J. Hermen, Lou Schwartz, Christian Moll, Valérie Maquil
Abstract: One of the main benefits of large interactive surfaces (e.g., wall-sized displays) lies in their support for collocated collaboration, facilitating simultaneous interaction with the displays and high awareness of other group members' actions. In remote collaboration, this awareness information has to be acquired through digital means such as video feeds, which typically convey very limited non-verbal communication, including workspace awareness. We describe a new approach to tackle that challenge: a multimodal pipeline that tracks, attributes, transmits, and visualises non-verbal information as workspace awareness cues across wall-sized displays placed at distant locations. Our approach relies on commodity depth cameras combined with screen configuration information to generate deictic cues such as pointing targets and gaze direction. It also leverages recent artificial intelligence breakthroughs to attribute such cues to identified individuals and to augment them with additional gestural interactions. In this paper, we expand on the details and rationale behind our approach, describe its technical implementation, validate its novelty with regard to the existing literature, and report early but promising results from an evaluation based on a mixed-presence decision-making scenario across two distant wall-sized displays.
Citations: 0
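The pipeline itself is only summarized above; the following sketch illustrates one building block it implies, computing a pointing target on a wall-sized display by intersecting a pointing ray (e.g., shoulder-to-hand from a depth-camera skeleton) with the display plane. The display geometry and joint positions are made-up example values, not data from the paper.

```python
import numpy as np

def pointing_target(shoulder, hand, display_origin, display_normal):
    """Intersect the shoulder->hand ray with the display plane.
    All arguments are 3D points/vectors in the same (e.g., depth-camera) coordinate frame.
    Returns the 3D intersection point, or None if the ray is parallel to or points away from the plane."""
    direction = hand - shoulder
    denom = np.dot(display_normal, direction)
    if abs(denom) < 1e-6:
        return None  # ray parallel to the display plane
    t = np.dot(display_normal, display_origin - shoulder) / denom
    if t < 0:
        return None  # display lies behind the pointing direction
    return shoulder + t * direction

# Made-up example: a wall display in the z = 0 plane, user standing 2 m in front of it.
shoulder = np.array([0.1, 1.4, 2.0])
hand = np.array([0.3, 1.3, 1.5])
target = pointing_target(shoulder, hand,
                         display_origin=np.array([0.0, 0.0, 0.0]),
                         display_normal=np.array([0.0, 0.0, 1.0]))
print(target)  # 3D point on the display; mapping to pixels needs the screen configuration
```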
All in One Place: Ensuring Usable Access to Online Shopping Items for Blind Users
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3664639. Pages: 1-25
Y. Prakash, Akshay Kolgar Nayak, Mohan Sunkara, Sampath Jayarathna, H. Lee, V. Ashok
Abstract: Perusing web data items such as shopping products is a core online user activity. To prevent information overload, the content associated with a data item is typically dispersed across multiple webpage sections and multiple web pages. Such content distribution has an unintended side effect: it significantly increases the interaction burden for blind users, since navigating back and forth between different sections on different pages is tedious and cumbersome with a screen reader. While existing work has proposed methods for the context of a single webpage, solutions enabling usable access to content distributed across multiple webpages are few and far between. In this paper, we present InstaFetch, a browser extension that dynamically generates an alternative screen reader-friendly user interface in real time, which blind users can leverage to almost instantly access item-related information such as the description, full specification, and user reviews, all in one place, without having to tediously navigate to different sections on different webpages. Moreover, InstaFetch supports natural language queries about any item, a feature blind users can exploit to quickly obtain desired information without manually trudging through reams of text. In a study with 14 blind users, we observed that participants needed significantly less time to peruse data items with InstaFetch than with a state-of-the-art solution.
Citations: 0
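InstaFetch's implementation is not described beyond the abstract. As a loose analogy only, the sketch below gathers description, specification, and review text scattered across several pages of a hypothetical shop into a single structure, the kind of consolidation a screen reader-friendly view could be built on. The URLs and CSS selectors are placeholders, not anything from the actual system.

```python
from dataclasses import dataclass, field

import requests
from bs4 import BeautifulSoup

# Placeholder URLs and selectors for a hypothetical shop; a real extension would
# derive these from the pages the user is actually browsing.
SECTIONS = {
    "description":   ("https://shop.example.com/item/123",         "#description"),
    "specification": ("https://shop.example.com/item/123/specs",   "#spec-table"),
    "reviews":       ("https://shop.example.com/item/123/reviews", ".review-body"),
}

@dataclass
class ItemView:
    """All item-related content collected in one place."""
    sections: dict[str, str] = field(default_factory=dict)

def fetch_item_view() -> ItemView:
    view = ItemView()
    for name, (url, selector) in SECTIONS.items():
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        texts = [node.get_text(" ", strip=True) for node in soup.select(selector)]
        view.sections[name] = "\n".join(texts) or "(section not found)"
    return view

if __name__ == "__main__":
    item = fetch_item_view()
    for name, text in item.sections.items():
        print(f"== {name} ==\n{text}\n")
```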
MARLUI: Multi-Agent Reinforcement Learning for Adaptive Point-and-Click UIs
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3661147. Pages: 1-27
Thomas Langerak (ETH Zürich, Switzerland), Christoph Gebhardt, Christian Holz, Sammy Christen, Mert Albaba
Abstract: As the number of selectable items increases, point-and-click interfaces rapidly become complex, leading to a decrease in usability. Adaptive user interfaces can reduce this complexity by automatically adjusting an interface to display only the most relevant items. A core challenge in developing adaptive interfaces is to infer user intent and choose adaptations accordingly. Current methods rely on tediously hand-crafted rules or carefully collected user data, and heuristics need to be recrafted and data regathered for every new task and interface. To address this issue, we formulate interface adaptation as a multi-agent reinforcement learning problem. Our approach learns adaptation policies without relying on heuristics or real user data, facilitating the development of adaptive interfaces across various tasks with minimal adjustment. In our formulation, a user agent mimics a real user and learns to interact with an interface via point-and-click actions. Simultaneously, an interface agent observes the user agent's behavior and learns interface adaptations that maximize the user agent's efficiency. For our evaluation, we substituted the simulated user agent with actual users. Our study involved twelve participants and concentrated on automatic toolbar item assignment. The results show that the policies developed in simulation apply effectively to real users, who completed tasks with fewer actions and in similar times compared to methods trained with real data. Additionally, we demonstrated our method's efficiency and generalizability across four different interfaces and tasks.
Citations: 0
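A toy sketch of the two-agent formulation described above: a simulated user agent picks targets in a point-and-click menu, while an interface agent observes click history and adapts which items the toolbar shows. Both policies here are trivial frequency heuristics rather than trained reinforcement learning policies, and the item counts and reward are invented for illustration.

```python
import random
from collections import Counter

N_ITEMS = 20        # size of the full menu
SLOTS = 5           # how many items the adaptive toolbar can show
STEPS = 200

# Simulated user agent: intends targets with a skewed (Zipf-like) preference.
weights = [1.0 / (rank + 1) for rank in range(N_ITEMS)]

def user_pick_target():
    return random.choices(range(N_ITEMS), weights=weights, k=1)[0]

# Interface agent: adapts the toolbar to the most frequently clicked items so far.
history = Counter()

def interface_adapt():
    if not history:
        return list(range(SLOTS))                 # default layout before any observations
    return [item for item, _ in history.most_common(SLOTS)]

clicks_in_toolbar = 0
for _ in range(STEPS):
    toolbar = interface_adapt()                   # interface agent acts
    target = user_pick_target()                   # user agent acts
    if target in toolbar:
        clicks_in_toolbar += 1                    # one-click selection ("reward")
    history[target] += 1                          # interface agent observes behavior

print(f"{clicks_in_toolbar}/{STEPS} selections were available directly in the adaptive toolbar")
```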
Immersive Analytics: The Influence of Flow, Sense of Agency, and Presence on Performance and Satisfaction
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3661144. Pages: 1-27
Jan P. Gründling, Benjamin Weyers
Abstract: Performance and user satisfaction are key quality indicators in immersive analytics. Performance in terms of, e.g., error rate can only be measured if a ground truth is available against which to judge whether a user action or analysis result is erroneous. Thus, a proxy measure is needed to estimate the user's performance when no ground truth is available. This work investigates flow, sense of agency, and presence as candidate measures in two experiments. First, these candidates are tested for their predictive value for task performance and satisfaction in a task without task-related semantics, to reduce bias from low task-interaction congruence. After the first experiment showed that only flow predicts performance and satisfaction, the second experiment tests flow, performance, and satisfaction in a realistic analytics scenario to improve external validity. The results suggest that flow experience might be a promising estimate of performance and satisfaction, and thus of the quality of an immersive analytics tool.
Citations: 0
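A worked sketch of the kind of predictive check the abstract describes: regressing a performance measure on a flow score. The data below are synthetic, and ordinary least squares via numpy stands in for whatever analysis the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: flow scores on a 1-7 scale and a task-performance measure
# constructed so that higher flow loosely predicts better performance.
flow = rng.uniform(1, 7, size=40)
performance = 10 + 2.5 * flow + rng.normal(0, 3, size=40)

# Ordinary least squares: performance ~ b0 + b1 * flow
X = np.column_stack([np.ones_like(flow), flow])
coeffs, *_ = np.linalg.lstsq(X, performance, rcond=None)
b0, b1 = coeffs

predicted = X @ coeffs
r_squared = 1 - np.sum((performance - predicted) ** 2) / np.sum((performance - performance.mean()) ** 2)

print(f"performance ~ {b0:.2f} + {b1:.2f} * flow, R^2 = {r_squared:.2f}")
```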
Development and Usability Evaluation of Transitional Cross-Reality Interfaces
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3664637. Pages: 1-32
Leonard Schmidt, Enes Yigitbas
Abstract: The concept of Transitional Cross-Reality Interfaces, in which a user can seamlessly transition across the reality-virtuality continuum, dates back to the introduction of the Magic Book in 2001 but has recently gained new research momentum as head-mounted displays have advanced to combine the formerly distinct concepts of augmented reality and virtual reality in a single device. New technological capabilities require new ways of developing applications to satisfy user requirements. Especially in the context of operating in multiple realities, developing AR/VR applications is not a trivial task, and the development of Transitional Cross-Reality Interfaces remains a complex engineering problem that requires specific methods, concepts, and tools. To address this problem and provide a systematic development approach, we propose a conceptual framework for both developing and evaluating Transitional Cross-Reality Interfaces. To evaluate our solution, we implemented a Transitional Cross-Reality Interface based on our framework and assessed it with both the measurements provided by the framework and additional questionnaires. We found high usability and interesting transitional behavior of users, indicating the usefulness of the proposed framework as an underlying software architecture for Transitional Cross-Reality Interfaces.
Citations: 0
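The framework itself is only summarized in the abstract. Purely as an illustration, the sketch below models a transitional interface as a small state machine over points on the reality-virtuality continuum and logs each transition, the sort of event data a usability evaluation of transitional behavior could draw on. The state names, allowed transitions, and logging format are assumptions, not the paper's architecture.

```python
import time
from enum import Enum

class RealityState(Enum):
    REAL = "real environment"
    AR = "augmented reality"
    VR = "virtual reality"

# Only neighboring points on the continuum can be reached directly; this is an
# assumption made for the sketch, not a rule taken from the paper.
ALLOWED = {
    RealityState.REAL: {RealityState.AR},
    RealityState.AR: {RealityState.REAL, RealityState.VR},
    RealityState.VR: {RealityState.AR},
}

class TransitionalInterface:
    def __init__(self):
        self.state = RealityState.REAL
        self.log = []                      # (timestamp, from_state, to_state)

    def transition(self, target: RealityState) -> bool:
        if target in ALLOWED[self.state]:
            self.log.append((time.time(), self.state, target))
            self.state = target
            return True
        return False                       # disallowed jump, e.g. REAL -> VR directly

ui = TransitionalInterface()
ui.transition(RealityState.AR)
ui.transition(RealityState.VR)
print([f"{a.name} -> {b.name}" for _, a, b in ui.log])
```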
PACMHCI - Engineering Interactive Computing Systems, June 2024: Editorial Introduction
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3664470. Pages: 1-1
Carmen Santoro, Anke Dittmar
Abstract: Welcome to this issue of the Proceedings of the ACM on Human-Computer Interaction, bringing together contributions from the community on Engineering Interactive Computing Systems (EICS). The EICS track of PACMHCI is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering.
Citations: 0
Significant Productivity Gains through Programming with Large Language Models
Proceedings of the ACM on Human-Computer Interaction. Pub Date: 2024-06-17. DOI: 10.1145/3661145. Pages: 1-29
Thomas Weber, Maximilian Brandmaier, Albrecht Schmidt, Sven Mayer
Abstract: Large language models like GPT and Codex are drastically altering many daily tasks, including programming, where they can rapidly generate code from natural language or informal specifications. They will thus change what it means to be a programmer and how programmers act during software development. This work explores how AI assistance for code generation impacts productivity. In our user study (N=24), we asked programmers to complete Python programming tasks supported by a) an auto-complete interface using GitHub Copilot, b) a conversational system using GPT-3, and c) traditionally, with just the web browser. Aside from significant gains in productivity metrics, participants displayed distinctive usage patterns and strategies, highlighting that the form of presentation and interaction affects how users engage with these systems. Our findings emphasize the benefits of AI-assisted coding and highlight the distinct design challenges these systems pose.
Citations: 0