Title: The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality
Authors: Mitchell L. Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori B. Hashimoto, Michael S. Bernstein
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445423
Abstract: Machine learning classifiers for human-facing tasks such as comment toxicity and misinformation often score highly on metrics such as ROC AUC but are received poorly in practice. Why this gap? Today, metrics such as ROC AUC, precision, and recall are used to measure technical performance; however, human-computer interaction observes that evaluation of human-facing systems should account for people's reactions to the system. In this paper, we introduce a transformation that more closely aligns machine learning classification metrics with the values and methods of user-facing performance measures. The disagreement deconvolution takes in any multi-annotator (e.g., crowdsourced) dataset, disentangles stable opinions from noise by estimating intra-annotator consistency, and compares each test set prediction to the individual stable opinions from each annotator. Applying the disagreement deconvolution to existing social computing datasets, we find that current metrics dramatically overstate the performance of many human-facing machine learning tasks: for example, performance on a comment toxicity task is corrected from .95 to .73 ROC AUC.

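The pipeline this abstract describes (estimate each annotator's stable opinion, then score predictions against those opinions individually rather than against one aggregated label) can be illustrated roughly as follows. This is a loose sketch, not the paper's estimator: it stands in for the intra-annotator consistency estimate with a simple per-annotator majority vote over repeated labels, and the example data is invented.

```python
from collections import Counter

def stable_opinions(repeat_labels):
    """Reduce each annotator's repeated labels on one item to a single
    stable opinion via majority vote (a stand-in for the paper's
    intra-annotator consistency estimation)."""
    return [Counter(labels).most_common(1)[0][0] for labels in repeat_labels]

def deconvolved_accuracy(predictions, per_item_repeat_labels):
    """Score each prediction against every annotator's stable opinion,
    not against a single aggregated label per item."""
    hits = total = 0
    for pred, repeat_labels in zip(predictions, per_item_repeat_labels):
        for opinion in stable_opinions(repeat_labels):
            hits += (pred == opinion)
            total += 1
    return hits / total

# Item 0: the second annotator is internally inconsistent but leans
# toward 0, so their stable opinion is 0 and the item splits 1 vs. 0.
data = [
    [[1, 1], [0, 1, 0]],   # item 0: stable opinions -> 1, 0
    [[0, 0], [0, 0]],      # item 1: stable opinions -> 0, 0
]
print(deconvolved_accuracy([1, 0], data))  # 0.75
```

Note how an aggregated-label metric would call the first prediction simply "correct," while the deconvolved score charges it for disagreeing with one annotator's stable opinion.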
Title: LightTouch Gadgets: Extending Interactions on Capacitive Touchscreens by Converting Light Emission to Touch Inputs
Authors: Kaori Ikematsu, Kunihiro Kato, Y. Kawahara
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445581
Abstract: We present LightTouch, a 3D-printed passive gadget to enhance touch interactions on unmodified capacitive touchscreens. The LightTouch gadgets simulate finger operations such as tapping, swiping, and multi-touch gestures by means of conductive materials and light-dependent resistors (LDRs) embedded in the object. The touchscreen emits visible light and the LDR senses the level of this light, which changes its resistance value. By controlling the screen brightness, it intentionally connects or disconnects the path between the GND and the touchscreen, thus allowing the touch inputs to be controlled. In contrast to conventional physical extensions for touchscreens, our technique requires neither continuous finger contact on the conductive part nor the use of batteries. As such, it opens up new possibilities for touchscreen interactions beyond the simple automation of touch inputs, such as establishing a communication channel between devices, enhancing the trackability of tangibles, and inter-application operations.

Title: Effects of Communication Directionality and AI Agent Differences in Human-AI Interaction
Authors: Zahra Ashktorab, Casey Dugan, James M. Johnson, Qian Pan, Wei Zhang, Sadhana Kumaravel, Murray Campbell
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445256
Abstract: In Human-AI collaborative settings that are inherently interactive, the direction of communication plays a role in how users perceive their AI partners. In an AI-driven cooperative game with partially observable information, players (be it the AI or the human player) require their actions to be interpreted accurately by the other player to yield a successful outcome. In this paper, we investigate social perceptions of AI agents with various directions of communication in a cooperative game setting. We measure participants' subjective social perceptions (rapport, intelligence, and likeability) of their partners when participants believe they are playing with an AI or with a human, and as the nature of the communication (responsiveness and leading roles) varies. We ran a large-scale study on Mechanical Turk (n=199) of this collaborative game and find significant differences in gameplay outcome and social perception across different AI agents, different directions of communication, and whether the agent is perceived to be an AI or a human. We find that the bias against AI that has been demonstrated in prior studies varies with the direction of communication and with the AI agent.

Title: Seeing Beyond Expert Blind Spots: Online Learning Design for Scale and Quality
Authors: Xu Wang, C. Rosé, K. Koedinger
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445045
Abstract: Maximizing system scalability and quality are sometimes at odds. This work provides an example showing that scalability and quality can be achieved at the same time in instructional design, contrary to what instructors may believe or expect. We situate our study in the education of HCI methods, and provide suggestions to improve active learning within the HCI education community. While designing learning and assessment activities, many instructors face the choice of using open-ended or close-ended activities. Close-ended activities such as multiple-choice questions (MCQs) enable automated feedback to students. However, a survey with 22 HCI professors revealed a belief that MCQs are less valuable than open-ended questions, and thus that using them entails sacrificing quality in order to achieve scalability. A study with 178 students produced no evidence to support this belief. This paper indicates more promise than concern in using MCQs for scalable instruction and assessment in at least some HCI domains.

Title: Styling Words: A Simple and Natural Way to Increase Variability in Training Data Collection for Gesture Recognition
Authors: Woojin Kang, Intaek Jung, Daeho Lee, Jin-Hyuk Hong
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445457
Abstract: Due to advances in deep learning, gestures have become a more common tool for human-computer interaction. Given a large amount of training data, deep learning models show remarkable performance in gesture recognition. Since it is expensive and time-consuming to collect gesture data from people, we are often confronted with a practicality issue in managing the quantity and quality of training data. It is well known that increasing training data variability can help to improve the generalization performance of machine learning models. Thus, we directly intervene in the collection of gesture data to increase human gesture variability by adding some words (called styling words) to the data collection instructions, e.g., giving the instruction "perform gesture #1 faster" as opposed to "perform gesture #1." Through an in-depth analysis of gesture features and video-based gesture recognition, we have confirmed the advantageous use of styling words in gesture training data collection.

Title: CoNotate: Suggesting Queries Based on Notes Promotes Knowledge Discovery
Authors: Srishti Palani, Zijian Ding, Austin Nguyen, Andrew Chuang, S. Macneil, Steven W. Dow
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445618
Abstract: When exploring a new domain through web search, people often struggle to articulate queries because they lack domain-specific language and well-defined informational goals. Perhaps search tools rely too much on the query to understand what a searcher wants. Towards expanding this contextual understanding of a user during exploratory search, we introduce a novel system, CoNotate, which offers query suggestions based on analyzing the searcher's notes and previous searches for patterns and gaps in information. To evaluate this approach, we conducted a within-subjects study where participants (n=38) conducted exploratory searches using a baseline system (standard web search) and the CoNotate system. The CoNotate approach helped searchers issue significantly more queries and discover more terminology than standard web search. This work demonstrates how search can leverage user-generated content to help people get started when exploring complex, multi-faceted information spaces.

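The gap-finding idea in this abstract, suggesting queries from terms that appear in a searcher's notes but not in any query issued so far, can be caricatured in a few lines. The tokenizer, the length filter, and the frequency scoring here are all assumptions for illustration, not CoNotate's actual analysis.

```python
import re
from collections import Counter

def suggest_queries(notes, past_queries, k=3):
    """Rank note terms the searcher has not yet queried, most frequent
    first -- a toy stand-in for note/search-gap analysis."""
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    searched = {t for q in past_queries for t in tokenize(q)}
    gaps = Counter(t for t in tokenize(notes)
                   if t not in searched and len(t) > 3)  # skip stopword-length tokens
    return [term for term, _ in gaps.most_common(k)]

notes = "Mesh routers forward packets; mesh latency depends on hop count"
# "mesh" ranks first: mentioned twice in the notes, never searched.
print(suggest_queries(notes, ["wifi routers"]))
```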
Title: Optimization-based User Support for Cinematographic Quadrotor Camera Target Framing
Authors: Christoph Gebhardt, Otmar Hilliges
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445568
Abstract: To create aesthetically pleasing aerial footage, the correct framing of camera targets is crucial. However, current quadrotor camera tools do not consider the 3D extent of actual camera targets in their optimization schemes and simply interpolate between keyframes when generating a trajectory. This can yield videos with aesthetically unpleasing target framing. In this paper, we propose a target framing algorithm that optimizes the quadrotor camera pose such that targets are positioned at desirable screen locations according to videographic compositional rules and remain entirely visible throughout a shot. Camera targets are identified using a semi-automatic pipeline which leverages a deep-learning-based visual saliency model. A large-scale perceptual study (N ≈ 500) shows that our method enables users to produce shots with a target framing that is closer to what they intended to create and as or more aesthetically pleasing than shots made with the previous state of the art.

Title: Generating the Presence of Remote Mourners: a Case Study of Funeral Webcasting in Japan
Authors: Daisuke Uriu, Kenta Toshima, Minori Manabe, Takeru Yazaki, Takeshi Funatsu, Atsushi Izumihara, Zendai Kashino, Atsushi Hiyama, M. Inami
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445617
Abstract: Funerals are irreplaceable events, especially for bereaved family members and relatives. However, the COVID-19 pandemic has prevented many people worldwide from attending their loved ones' funerals. The authors had the opportunity to assist one family faced with this predicament by webcasting and recording funeral rites held near Tokyo in June 2020. Using our original 360-degree telepresence system and smartphones running Zoom, we enabled the deceased's elder siblings to attend the funeral remotely and did our utmost to make them feel present in the funeral hall. Although the Zoom webcast contributed more to their remote attendance than our system did, we arrived at findings that could be useful for designing remote funeral attendance. From these findings, we also discuss how HCI designers can contribute to this highly sensitive issue, weaving together knowledge from various domains including techno-spiritual practices, thanato-sensitive design, and other religious and cultural aspects of death rituals.

Title: XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
Authors: João Marcelo Evangelista Belo, A. Feit, Tiare M. Feuchtner, Kaj Grønbæk
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445349
Abstract: Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.

Title: Gesture Knitter: A Hand Gesture Design Tool for Head-Mounted Mixed Reality Applications
Authors: George B. Mo, John J. Dudley, P. Kristensson
In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021-05-06)
DOI: https://doi.org/10.1145/3411764.3445766
Abstract: Hand gestures are a natural and expressive input method enabled by modern mixed reality headsets. However, it remains challenging for developers to create custom gestures for their applications. Conventional strategies for bespoke gesture recognition involve either hand-crafting or data-intensive deep learning. Neither approach is well suited for rapid prototyping of new interactions. This paper introduces a flexible and efficient alternative approach for constructing hand gestures. We present Gesture Knitter: a design tool for creating custom gesture recognizers with minimal training data. Gesture Knitter allows the specification of gesture primitives that can then be combined to create more complex gestures using a visual declarative script. Designers can build custom recognizers by declaring them from scratch or by providing a demonstration that is automatically decoded into its primitive components. Our developer study shows that Gesture Knitter achieves high recognition accuracy despite minimal training data and delivers an expressive and creative design experience.
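The declarative composition idea in this abstract, building complex recognizers by combining gesture primitives, can be sketched as follows. The primitive names, the feature fields (`thumb_index_dist`, `vx`), and the `sequence` combinator are invented for illustration; they are not Gesture Knitter's actual script language or hand representation.

```python
# Each primitive is a predicate over one frame of hand-tracking features.
def pinch(frame):
    return frame["thumb_index_dist"] < 0.02  # thumb and index nearly touching

def swipe_right(frame):
    return frame["vx"] > 0.5  # hand moving right fast enough

def sequence(*primitives):
    """Build a recognizer that fires when the primitives occur in order
    somewhere in the frame stream."""
    def recognize(frames):
        i = 0
        for frame in frames:
            if i < len(primitives) and primitives[i](frame):
                i += 1
        return i == len(primitives)
    return recognize

# Declare a compound gesture from primitives, in the spirit of the
# visual declarative script described in the abstract.
pinch_then_swipe = sequence(pinch, swipe_right)

frames = [
    {"thumb_index_dist": 0.10, "vx": 0.0},
    {"thumb_index_dist": 0.01, "vx": 0.0},  # pinch
    {"thumb_index_dist": 0.08, "vx": 0.9},  # swipe right
]
print(pinch_then_swipe(frames))  # True
```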