Title: Inline Visualization and Manipulation of Real-Time Hardware Log for Supporting Debugging of Embedded Programs
Authors: Andrea Bianchi, Zhi Lin Yap, Punn Lertjaturaphat, Austin Z. Henley, K. Moon, Yoonji Kim
DOI: https://doi.org/10.1145/3660250
Proceedings of the ACM on Human-Computer Interaction, pp. 1–26. Published 2024-06-17.
Abstract: The advent of user-friendly embedded prototyping systems, exemplified by platforms like Arduino, has significantly democratized the creation of interactive devices that combine software programs with electronic hardware. This interconnection between hardware and software, however, makes the identification of bugs very difficult, as problems could be rooted in the program, in the circuit, or at their intersection. While there are tools to assist in identifying and resolving bugs, they typically require hardware instrumentation or visualizing logs in serial monitors. Based on the findings of a formative study, we designed Inline, a programming tool that simplifies debugging of embedded systems by making the internal state of the hardware and the program's execution flow explicit through visualizations of the hardware logs directly within the user's code. The system's key characteristics are 1) an inline presentation of logs within the code, 2) real-time tracking of the execution flow, and 3) an expression language to manipulate and filter the logs. The paper presents the detailed implementation of the system and a study with twelve users, which demonstrates which features were adopted and how they were leveraged to complete debugging tasks.

{"title":"TARPS: A Toolbox for Enhancing Privacy and Security for Collaborative AR","authors":"S. Krings, Enes Yigitbas","doi":"10.1145/3660251","DOIUrl":"https://doi.org/10.1145/3660251","url":null,"abstract":"Modern AR applications collect a wide range of data to leverage context-specific functionalities. This includes data that might be private or security-critical (e.g., the camera view of a private home), calling for protective measures, especially in collaborative settings where data is inherently shared. A literature research revealed a lack of development support for privacy and security in collaborative AR. This makes it difficult for developers to find the time and resources to include protection mechanisms, leading to very limited options for end-users to control what data about them is shared. To address this problem, we present TARPS, a development Toolbox for enhancing collaborative AR applications with Privacy and Security protection mechanisms. TARPS is an out-of-the-box solution to add protection features to collaborative AR applications in a configurable manner. In developer interviews, the idea of TARPS was well received and an end-user study with an application created using TARPS showed that the included protection features were usable and accepted by end-users.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"3 12","pages":"1 - 22"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141335145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Integration of Extended Reality with a Cyber-Physical Factory Environment and its Digital Twins
Authors: Marco Emporio, A. Caputo, Deborah Pintani, Dong Seon Cheng, Thomas De Marchi, Gianmaria Forte, Franco Fummi, A. Giachetti
DOI: https://doi.org/10.1145/3660246
Proceedings of the ACM on Human-Computer Interaction, pp. 1–13. Published 2024-06-17.
Abstract: In this paper, we present an example of complete integration of eXtended Reality technologies within a demonstration laboratory showcasing Industry 4.0/5.0 compliant machinery in realistic scenarios of use. We describe the design choices and the implementation of the augmented and virtual reality applications we developed, which are potentially usable to support different real-world tasks and feature advanced gesture-based interaction modes. We also describe the optimized communication architecture used to synchronize data between the cyber-physical factory environment with all its components, its industrial digital twin, and the augmented and virtual replica of the factory. The example tasks supported by the tools in public demonstrations allow users wearing Microsoft HoloLens 2 or Meta Quest 2 headsets to monitor the status of the prototype production line and operate on it, locally or remotely. An example video showing the applications is available in the supplementary material.

{"title":"A Type System for Flexible User Interactions Handling","authors":"Arnaud Blouin","doi":"10.1145/3660248","DOIUrl":"https://doi.org/10.1145/3660248","url":null,"abstract":"Engineering user interfaces involves the use of multiple user interactions. Developers may struggle with programming and using those user interactions because of a lack of flexibility that affects the current user interface programming approaches. First, developers may want to switch from one user interaction to another close one or combine multiple user interactions without changing much code. Second, developers may also want to use several user interactions to concisely produce the same user command. Third, developers may want to be warned about conflicts between involved user interactions. Currently, developers can hardly perform these first two cases without applying numerous code changes or producing boilerplate code. Regarding the third case, developers can only observe such issues during the execution of the interactive systems, which prolongs the detection time. To overcome these three issues this paper proposes a user interaction type system. This user interaction type system reifies user interactions as first-class concerns with typing facilities for enabling user interactions substitutability and union. It also allows the writing of type checking rules to check for possible issues related to user interactions at compile time. We implemented the type system within the TypeScript version of Interacto, a framework for processing user interactions. We evaluated the soundness and the expressiveness of our approach through several implemented use cases. This demonstrates the feasibility of the proposed approach and its ability to overcome the three mentioned issues.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"3 4","pages":"1 - 27"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141335261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of Code Generation for Simulating Participant Behavior in Experience Sampling Method by Iterative In-Context Learning of a Large Language Model","authors":"Alireza Khanshan, Pieter van Gorp, P. Markopoulos","doi":"10.1145/3661143","DOIUrl":"https://doi.org/10.1145/3661143","url":null,"abstract":"The Experience Sampling Method (ESM) is commonly used to understand behaviors, thoughts, and feelings in the wild by collecting self-reports. Sustaining sufficient response rates, especially in long-running studies remains challenging. To avoid low response rates and dropouts, experimenters rely on their experience, proposed methodologies from earlier studies, trial and error, or the scarcely available participant behavior data from previous ESM protocols. This approach often fails in finding the acceptable study parameters, resulting in redesigning the protocol and repeating the experiment. Research has shown the potential of machine learning to personalize ESM protocols such that ESM prompts are delivered at opportune moments, leading to higher response rates. The corresponding training process is hindered due to the scarcity of open data in the ESM domain, causing a cold start, which could be mitigated by simulating participant behavior. Such simulations provide training data and insights for the experimenters to update their study design choices. Creating this simulation requires behavioral science, psychology, and programming expertise. Large language models (LLMs) have emerged as facilitators for information inquiry and programming, albeit random and occasionally unreliable. We aspire to assess the readiness of LLMs in an ESM use case. We conducted research using GPT-3.5 turbo-16k to tackle an ESM simulation problem. We explored several prompt design alternatives to generate ESM simulation programs, evaluated the output code in terms of semantics and syntax, and interviewed ESM practitioners. We found that engineering LLM-enabled ESM simulations have the potential to facilitate data generation, but they perpetuate trust and reliability challenges.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"12 40","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141335188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does the Medium Matter? A Comparison of Augmented Reality Media in Instructing Novices to Perform Complex, Skill-Based Manual Tasks.","authors":"H. Dhiman, Carsten Röcker","doi":"10.1145/3660249","DOIUrl":"https://doi.org/10.1145/3660249","url":null,"abstract":"Past research comparing augmented reality (AR) media such as in-situ projection and head-mounted devices (HMD) has usually considered simple manual activities. It is unknown whether previously reported differences between different AR media also apply to complex, skill-driven tasks. In this paper, we explore the feasibility and challenges in designing AR instructions for expertise-driven, skilled activities. We present findings from a real-world, between-subjects experiment in which novices were instructed to trim and bone sub-primal cuts of pork using two interactive AR prototypes, one utilizing in-situ projection and a second using the Hololens 2. The prototypes and instructions were designed in consultation with experts. We compared novices' task performance and subjective perceptions and gathered experts' feedback. Although both users and experts indicated a subjective preference for in-situ projection, results indicate that when tasks require knowledge, skill and expertise, the choice of the AR medium itself may not be consequential. Rather, in our experiment, the instruction quality influenced comprehension, knowledge retention and task performance. Hence, from an engineering perspective, emphasis ought to be laid on gathering and structuring expert performance and knowledge to create effective instructions, which could be delivered using any AR medium suited to the task and work environment.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"10 28","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141335354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Design Goals for End-User Development of Robot-Assisted Physical Training Activities: A Participatory Design Study
Authors: Jose Pablo De la Rosa Gutierrez, Thiago Rocha Silva, Yvonne Dittrich, Anders Stengaard Sørensen
DOI: https://doi.org/10.1145/3664632
Proceedings of the ACM on Human-Computer Interaction, pp. 1–31. Published 2024-06-17.
Abstract: Programming robots presents significant challenges, including high costs, extensive time commitments and steep learning curves, particularly for individuals lacking technical background in engineering. These barriers have been partially mitigated by the emergence of end-user development methodologies. Yet existing approaches often fall short in equipping users with the necessary software engineering competencies to develop comprehensive robot behaviors or to effectively maintain and re-purpose their creations. In this paper, we introduce a novel end-user development approach designed to empower physical therapists to independently specify robot-assisted physical training exercises, eliminating the need for robotics experts' intervention. Our approach is based on a set of design goals obtained through a participatory design study with experts in the field. It utilizes a textual domain-specific language (DSL) that enables users to define expected robot behaviors through Behaviour-Driven Development (BDD) scenarios. This paper discusses key themes, design objectives, and the evolution of requirements that emerged from an evaluative workshop.

Title: Supporting Mixed-Presence Awareness across Wall-Sized Displays Using a Tracking Pipeline based on Depth Cameras
Authors: Adrien Coppens, J. Hermen, Lou Schwartz, Christian Moll, Valérie Maquil
DOI: https://doi.org/10.1145/3664634
Proceedings of the ACM on Human-Computer Interaction, pp. 1–32. Published 2024-06-17.
Abstract: One of the main benefits of large interactive surfaces (e.g., wall-sized displays) lies in their support for collocated collaboration, by facilitating simultaneous interactions with the displays and high awareness of other group members' actions. In the context of remote collaboration, this awareness information needs to be acquired through digital means such as video feeds, which typically offer very limited information on non-verbal communication aspects, including workspace awareness. We describe a new approach we have implemented to tackle that challenge through a multimodal pipeline that deals with tracking, attributing, transmitting, and visualising non-verbal information, through what we refer to as workspace awareness cues, across wall-sized displays placed at distant locations. Our approach relies on commodity depth cameras combined with screen configuration information to generate deictic cues such as pointing targets and gaze direction. It also leverages recent artificial intelligence breakthroughs to attribute such cues to identified individuals and augment them with additional gestural interactions. In the present paper, we expand on the details and rationale behind our approach, describe its technical implementation, validate its novelty with regard to the existing literature, and report on early but promising results from an evaluation we conducted based on a mixed-presence decision-making scenario across two distant wall-sized displays.

Title: All in One Place: Ensuring Usable Access to Online Shopping Items for Blind Users
Authors: Y. Prakash, Akshay Kolgar Nayak, Mohan Sunkara, Sampath Jayarathna, H. Lee, V. Ashok
DOI: https://doi.org/10.1145/3664639
Proceedings of the ACM on Human-Computer Interaction, pp. 1–25. Published 2024-06-17.
Abstract: Perusing web data items such as shopping products is a core online user activity. To prevent information overload, the content associated with data items is typically dispersed across multiple webpage sections over multiple web pages. However, such content distribution manifests an unintended side effect of significantly increasing the interaction burden for blind users, since navigating to and fro between different sections in different pages is tedious and cumbersome with their screen readers. While existing works have proposed methods for the context of a single webpage, solutions enabling usable access to content distributed across multiple webpages are few and far between. In this paper, we present InstaFetch, a browser extension that dynamically generates an alternative screen reader-friendly user interface in real time, which blind users can leverage to almost instantly access different item-related information such as the description, full specification, and user reviews, all in one place, without having to tediously navigate to different sections in different webpages. Moreover, InstaFetch also supports natural language queries about any item, a feature blind users can exploit to quickly obtain desired information, thereby avoiding manually trudging through reams of text. In a study with 14 blind users, we observed that the participants needed significantly less time to peruse data items with InstaFetch than with a state-of-the-art solution.

Title: MARLUI: Multi-Agent Reinforcement Learning for Adaptive Point-and-Click UIs
Authors: Thomas Langerak, Christoph Gebhardt, Christian Holz, Sammy Christen, Mert Albaba
DOI: https://doi.org/10.1145/3661147
Proceedings of the ACM on Human-Computer Interaction, pp. 1–27. Published 2024-06-17.
Abstract: As the number of selectable items increases, point-and-click interfaces rapidly become complex, leading to a decrease in usability. Adaptive user interfaces can reduce this complexity by automatically adjusting an interface to display only the most relevant items. A core challenge for developing adaptive interfaces is to infer user intent and choose adaptations accordingly. Current methods rely on tediously hand-crafted rules or carefully collected user data. Furthermore, heuristics need to be recrafted and data regathered for every new task and interface. To address this issue, we formulate interface adaptation as a multi-agent reinforcement learning problem. Our approach learns adaptation policies without relying on heuristics or real user data, facilitating the development of adaptive interfaces across various tasks with minimal adjustments needed. In our formulation, a user agent mimics a real user and learns to interact with an interface via point-and-click actions. Simultaneously, an interface agent learns interface adaptations to maximize the user agent's efficiency by observing the user agent's behavior. For our evaluation, we substituted the simulated user agent with actual users. Our study involved twelve participants and concentrated on automatic toolbar item assignment. The results show that the policies we developed in simulation effectively apply to real users. These users were able to complete tasks with fewer actions and in similar times compared to methods trained with real data. Additionally, we demonstrated our method's efficiency and generalizability across four different interfaces and tasks.
