ToonNote: Improving Communication in Computational Notebooks Using Interactive Data Comics
Daye Kang, Tony Ho, Nicolai Marquardt, Bilge Mutlu, Andrea Bianchi
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445434
Abstract: Computational notebooks help data analysts analyze and visualize datasets, and share analysis procedures and outputs. However, notebooks typically combine code (e.g., Python scripts), notes, and outputs (e.g., tables, graphs). This combination of disparate materials is known to hinder the comprehension of notebooks, making it difficult for analysts to collaborate with others who are unfamiliar with the dataset. To mitigate this problem, we introduce ToonNote, a JupyterLab extension that converts notebooks into “data comics.” ToonNote provides a simplified view of a Jupyter notebook, highlighting the most important results while supporting interactive and free exploration of the dataset. This paper presents the results of a formative study that motivated the system, its implementation, and an evaluation with 12 users, demonstrating the effectiveness of the produced comics. We discuss how our findings inform the future design of interfaces for computational notebooks and features to support diverse collaborators.
AdolescentBot: Understanding Opportunities for Chatbots in Combating Adolescent Sexual and Reproductive Health Problems in Bangladesh
R. Rahman, M. R. Rahman, Nafis Irtiza Tripto, Mohammed Eunus Ali, S. H. Apon, Rifat Shahriyar
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445694
Abstract: Traditional face-to-face health consultation systems have largely failed to attract teenagers in Bangladesh to seek reproductive and sexual health support from doctors and practitioners, as ‘sex’- and ‘adolescence’-related issues are considered social taboos and are rarely discussed openly with anyone. This has damaging implications for the physiological and mental well-being of a large group of people. In this paper, we study the effectiveness of chatbots in assisting adolescents to seek reproductive and sexual health support by analyzing responses from 256 participants, including adolescents and medical personnel from six different regions of Bangladesh. We prototyped an interactive chatbot, AdolescentBot, and analyzed users’ communication patterns, feelings, and contexts of use as a first point of support for adolescence-related health advice. Our analysis finds that a chatbot can satisfy most of the users’ queries, and that the majority of queries are associated with misconceptions. Finally, we discuss ethical and societal issues around chatbot usage and recommend a set of design propositions for AdolescentBot.
TiltChair: Manipulative Posture Guidance by Actively Inclining the Seat of an Office Chair
Kazuyuki Fujita, Aoi Suzuki, Kazuki Takashima, Kaori Ikematsu, Y. Kitamura
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445151
Abstract: We propose TiltChair, an actuated office chair that physically manipulates the user’s posture by actively inclining the chair’s seat to address problems associated with prolonged sitting. The system controls the inclination angle and motion speed with the aim of achieving manipulative but unobtrusive posture guidance. To demonstrate its potential, we first built a prototype of TiltChair with a seat that can be tilted under pneumatic control. We then investigated the effects of the seat’s inclination angle and motion on task performance and overall sitting experience through two experiments. The results show that the inclination angle mainly affects the difficulty of maintaining one’s posture, while the motion speed affects the conspicuousness and subjective acceptability of the motion. Neither of these seating conditions affected objective task performance. Based on these results, we propose a design space for facilitating effective seat-inclination behavior along the three dimensions of angle, speed, and continuity. Furthermore, we discuss promising applications.
Preserving Agency During Electrical Muscle Stimulation Training Speeds up Reaction Time Directly After Removing EMS
Shunichi Kasahara, Kazuma Takada, Jun Nishida, Kazuhisa Shibata, S. Shimojo, Pedro Lopes
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445147
Abstract: Force-feedback devices, such as motor-based exoskeletons or wearables based on electrical muscle stimulation (EMS), have the unique potential to accelerate users’ own reaction time (RT). However, this speedup has only been explored while the device is attached to the user. Very little is known about whether the faster reaction time persists after the user removes the device from their body. This is precisely what we investigated by means of a simple reaction time experiment in which participants were asked to tap as soon as they saw an LED flash. Participants experienced three EMS conditions: (1) fast-EMS, in which the electrical impulses were synced with the LED; (2) agency-EMS, in which the electrical impulse was delivered 40 ms faster than the participant’s own RT, which prior work has shown to preserve one’s sense of agency over the movement; and (3) late-EMS, in which the impulse was delivered after the participant’s own RT. Our results revealed that participants’ RT was significantly reduced, by approximately 8 ms (up to 20 ms), only after training with the agency-EMS condition. This finding suggests that preserving agency during EMS training is key to motor adaptation, i.e., it enables a faster motor response even after the user has removed the EMS device from their body.
Let’s Frets! Assisting Guitar Students During Practice via Capacitive Sensing
Karola Marky, Andreas Weiß, A. Matviienko, Florian Brandherm, Sebastian Wolf, Martin Schmitz, Florian Krell, Florian Müller, Max Mühlhäuser, T. Kosch
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445595
Abstract: Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let’s Frets, a modular guitar learning system that provides visual indicators and captures finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let’s Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let’s Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions, while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude by discussing how Let’s Frets enables independent practice sessions and how it can be translated to other musical instruments.
Mapping Design Spaces for Audience Participation in Game Live Streaming
Alina Striner, Andrew M. Webb, Jessica Hammer, A. Cook
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445511
Abstract: Live streaming sites such as Twitch offer new ways for remote audiences to engage with and affect gameplay. While research has considered how audiences interact with games, HCI lacks clear demarcations of the potential design spaces for audience participation. This paper introduces and validates a theme map of audience participation in game live streaming for student designers. The map is a lens that reveals relationships among the themes and sub-themes of Agency, Pacing, and Community, supporting designers in exploring, reflecting upon, describing, and making sense of emerging, complex design spaces. We are the first to articulate such a lens and to provide a reflective tool to support future research and education. To create the map, we perform a thematic analysis of design process documents from a course on audience participation for Twitch, using this analysis to visually coordinate relationships between important themes. To help student designers analyze and reflect on existing experiences, we supplement the theme map with a set of mapping procedures. We validate the applicability of our map with a second set of student designers, who found the map useful as a comparative and reflective tool.
Context-Based Interface Prototyping: Understanding the Effect of Prototype Representation on User Feedback
Marius Hoggenmüller, M. Tomitsch, L. Hespanhol, Tram Thi Minh Tran, Stewart Worrall, E. Nebot
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445159
Abstract: The rise of autonomous systems in cities, such as automated vehicles (AVs), requires new approaches for prototyping and evaluating how people interact with those systems through context-based user interfaces, such as external human-machine interfaces (eHMIs). In this paper, we compare three prototype representations (real-world VR, computer-generated VR, and real-world video) of an eHMI in a mixed-methods study with 42 participants. Quantitative results show that while the real-world VR representation leads to a higher sense of presence, no significant differences in user experience and trust towards the AV itself were found. However, interview data show that participants focused on different experiential and perceptual aspects in each of the prototype representations. These differences are linked to spatial awareness and the perceived realism of the AV behaviour and its context, in turn affecting how participants assessed trust and the eHMI. The paper offers guidelines for prototyping and evaluating context-based interfaces through simulations.
Hummer: Text Entry by Gaze and Hum
Ramin Hedeshy, C. Kumar, Raphael Menges, Steffen Staab
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445501
Abstract: Text entry by gaze is a useful means of hands-free interaction in settings where dictation suffers from poor voice recognition or where spoken words and sentences jeopardize privacy or confidentiality. However, text entry by gaze still shows inferior performance and quickly exhausts its users. We introduce text entry by gaze and hum as a novel hands-free text entry method. Building on related literature, we converge on word-level text entry through the analysis of gaze paths that are temporally constrained by humming. We develop and evaluate two design choices: “HumHum” and “Hummer.” The first method requires short hums to indicate the start and end of a word; the second interprets one continuous hum as marking a word from start to end. In an experiment with 12 participants, Hummer achieved a commendable text entry rate of 20.45 words per minute and outperformed both HumHum and the gaze-only method EyeSwipe in quantitative and qualitative measures.
Taking Mental Health & Well-Being to the Streets: An Exploratory Evaluation of In-Vehicle Interventions in the Wild
Kevin Koch, Verena Tiefenbeck, Shu Liu, T. Berger, E. Fleisch, Felix Wortmann
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3446865
Abstract: The increasing prevalence of mental disorders worldwide calls for novel types of prevention measures. Given the number of commuters who spend a substantial amount of time on the road, the car offers an opportune environment. This paper presents the first in-vehicle intervention study targeting mental health and well-being on public roads. We designed and implemented two in-vehicle interventions based on proven psychotherapy techniques: the first uses mindfulness exercises while driving, and the second induces positive emotions through music. Ten ordinary, healthy commuters completed 313 of these interventions on their daily drives over two months. We collected drivers’ immediate and post-driving feedback for each intervention and interviewed the drivers after the end of the study. The results show that both interventions improved drivers’ well-being. While participants rated the music intervention very positively, the reception of the mindfulness intervention was more ambivalent.
Reading in VR: The Effect of Text Presentation Type and Location
Rufat Rzayev, Polina Ugnivenko, Sarah Graf, V. Schwind, N. Henze
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021). DOI: https://doi.org/10.1145/3411764.3445606
Abstract: Reading is a fundamental activity for obtaining information in both the real and the digital world. Virtual reality (VR) allows novel approaches for users to view, read, and interact with text. For efficient reading, however, it is necessary to understand how text should be displayed in VR without impairing the VR experience. We therefore conducted a study with 18 participants to investigate text presentation type and location in VR. We compared world-fixed, edge-fixed, and head-fixed text locations; texts were displayed either using Rapid Serial Visual Presentation (RSVP) or as a paragraph. We found that RSVP is a promising presentation type for reading short texts displayed in an edge-fixed or head-fixed location in VR. The paragraph presentation type with a world-fixed or edge-fixed location is promising for reading long texts if movement in the virtual environment is not required. Insights from our study inform the design of reading interfaces for VR applications.