Richard Li, Philip Vutien, Sabrina Omer, Michael Yacoub, George Ioannou, Ravi Karkar, Sean A Munson, James Fogarty. "Deploying and Examining Beacon for At-Home Patient Self-Monitoring with Critical Flicker Frequency." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3714240. Published 2025-05-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12165253/pdf/

Abstract: Chronic liver disease can lead to neurological conditions that result in coma or death. Although early detection can allow for intervention, testing is infrequent and unstandardized. Beacon is a device for at-home patient self-measurement of cognitive function via critical flicker frequency (CFF), the frequency at which a flickering light appears steady to an observer. This paper presents our efforts in iterating on Beacon's hardware and software to enable at-home use, then reports on an at-home deployment with 21 patients taking measurements over 6 weeks. We found that measurements were stable despite being taken at different times and in different environments. Finally, through interviews with 15 patients and 5 hepatologists, we report on participant experiences with Beacon, preferences around how CFF data should be presented, and the role of caregivers in helping patients manage their condition. Informed by our experiences with Beacon, we further discuss design implications for home health devices.
Daniel A Adler, Yuewen Yang, Thalia Viranda, Anna R Van Meter, Emma Elizabeth McGinty, Tanzeem Choudhury. "Designing Technologies for Value-based Mental Healthcare: Centering Clinicians' Perspectives on Outcomes Data Specification, Collection, and Use." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3713481. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12218218/pdf/

Abstract: Health information technologies are transforming how mental healthcare is paid for through value-based care programs, which tie payment to data quantifying care outcomes. But it is unclear what outcomes data these technologies should store, how to engage users in data collection, and how outcomes data can improve care. Given these challenges, we conducted interviews with 30 U.S.-based mental health clinicians to explore the design space of health information technologies that support outcomes data specification, collection, and use in value-based mental healthcare. Our findings center clinicians' perspectives on aligning outcomes data for payment programs and care; opportunities for health technologies and personal devices to improve data collection; and considerations for using outcomes data to hold stakeholders, including clinicians, health insurers, and social services, financially accountable in value-based mental healthcare. We conclude with implications for future research designing and developing technologies supporting value-based care across stakeholders involved with mental health service delivery.
Jingyi Xie, Rui Yu, H E Zhang, Syed Masum Billah, Sooyeon Lee, John M Carroll. "Beyond Visual Perception: Insights from Smartphone Interaction of Visually Impaired Users with Large Multimodal Models." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3714210. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12338113/pdf/

Abstract: Large multimodal models (LMMs) have enabled new AI-powered applications that help people with visual impairments (PVI) receive natural language descriptions of their surroundings through audible text. We investigated how this emerging paradigm of visual assistance transforms how PVI perform and manage their daily tasks. Moving beyond basic usability assessments, we examined both the capabilities and limitations of LMM-based tools in personal and social contexts, while exploring design implications for their future development. Through interviews with 14 visually impaired users and analysis of image descriptions from both participants and social media using Be My AI (an LMM-based application), we identified two key limitations. First, these systems' context awareness suffers from hallucinations and misinterpretations of social contexts, styles, and human identities. Second, their intent-oriented capabilities often fail to grasp and act on users' intentions. Based on these findings, we propose design strategies for improving both human-AI and AI-AI interactions, contributing to the development of more effective, interactive, and personalized assistive technologies.
{"title":"VideoA11y: Method and Dataset for Accessible Video Description.","authors":"Chaoyu Li, Sid Padmanabhuni, Maryam S Cheema, Hasti Seifi, Pooyan Fazli","doi":"10.1145/3706598.3714096","DOIUrl":"10.1145/3706598.3714096","url":null,"abstract":"<p><p>Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y/.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. 
CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398407/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eleanor R Burgess, David C Mohr, Sean A Munson, Madhu C Reddy. "What's In Your Kit? Mental Health Technology Kits for Depression Self-Management." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3713585. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12118807/pdf/

Abstract: This paper characterizes the mental health technology "kits" of individuals managing depression: the specific technologies on their digital devices and physical items in their environments that people turn to as part of their mental health management. We interviewed 28 individuals living across the United States who use bundles of connected tools for both individual and collaborative mental health activities. We contribute to the HCI community by conceptualizing these tool assemblages that people managing depression have constructed over time. We detail categories of tools, describe kit characteristics (intentional, adaptable, available), and present participant ideas for future mental health support technologies. We then discuss what a mental health technology kit perspective means for researchers and designers and describe design principles (building within current toolkits; creating new tools from current self-management strategies; and identifying gaps in people's current kits) to support depression self-management across an evolving set of tools.
Amira Skeggs, Ashish Mehta, Valerie Yap, Seray B Ibrahim, Charla Rhodes, James J Gross, Sean A Munson, Predrag Klasnja, Amy Orben, Petr Slovak. "Micro-narratives: A Scalable Method for Eliciting Stories of People's Lived Experience." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3713999. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265993/pdf/

Abstract: Engaging with people's lived experiences is foundational for HCI research and design. This paper introduces a novel narrative elicitation method to empower people to easily articulate 'micro-narratives' emerging from their lived experiences, irrespective of their writing ability or background. Our approach aims to enable at-scale collection of rich, co-created datasets that highlight target populations' voices with minimal participant burden, while precisely addressing specific research questions. To pilot this idea and test its feasibility, we: (i) developed an AI-powered prototype, which leverages LLM-chaining to scaffold the cognitive steps necessary for users' narrative articulation; (ii) deployed it in three mixed-methods studies involving over 380 users; and (iii) consulted with established academics as well as C-level staff at (inter)national non-profits to map out potential applications. Both qualitative and quantitative findings show the acceptability and promise of the micro-narrative method, while also identifying the ethical and safeguarding considerations necessary for any at-scale deployments.
{"title":"VisiMark: Characterizing and Augmenting Landmarks for People with Low Vision in Augmented Reality to Support Indoor Navigation.","authors":"Ruijia Chen, Junru Jiang, Pragati Maheshwary, Brianna R Cochran, Yuhang Zhao","doi":"10.1145/3706598.3713847","DOIUrl":"10.1145/3706598.3713847","url":null,"abstract":"<p><p>Landmarks are critical in navigation, supporting self-orientation and mental model development. Similar to sighted people, people with low vision (PLV) frequently look for landmarks via visual cues but face difficulties identifying some important landmarks due to vision loss. We first conducted a formative study with six PLV to characterize their challenges and strategies in landmark selection, identifying their unique landmark categories (e.g., area silhouettes, accessibility-related objects) and preferred landmark augmentations. We then designed <i>VisiMark</i>, an AR interface that supports landmark perception for PLV by providing both overviews of space structures and in-situ landmark augmentations. We evaluated VisiMark with 16 PLV and found that VisiMark enabled PLV to perceive landmarks they preferred but could not easily perceive before, and changed PLV's landmark selection from only visually-salient objects to cognitive landmarks that are more important and meaningful. We further derive design considerations for AR-based landmark augmentation systems for PLV.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. 
CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12269830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adrienne Pichon, Jessica R Blumberg, Lena Mamykina, Noémie Elhadad. "The Voice of Endo: Leveraging Speech for an Intelligent System That Can Forecast Illness Flare-ups." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3714040. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439622/pdf/

Abstract: Managing complex chronic illness is challenging due to its unpredictability. This paper explores the potential of voice for automated flare-up forecasts. We conducted a six-week speculative design study with individuals with endometriosis, tasking participants to submit daily voice recordings and symptom logs. Through focus groups, we elicited their experiences with voice capture and perceptions of its usefulness in forecasting flare-ups. Participants were enthusiastic and intrigued at the potential of flare-up forecasts through the analysis of their voice. They highlighted imagined benefits from the experience of recording in supporting emotional aspects of illness and validating both day-to-day and overall illness experiences. Participants reported that their recordings revolved around their endometriosis, suggesting that the recordings' content could further inform forecasting. We discuss potential opportunities and challenges in leveraging the voice as a data modality in human-centered AI tools that support individuals with complex chronic conditions.
Aaleyah Lewis, Jesse J Martinez, Maitraye Das, James Fogarty. "Inaccessible and Deceptive: Examining Experiences of Deceptive Design with People Who Use Visual Accessibility Technology." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2025). DOI: 10.1145/3706598.3713784. Published 2025-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12188898/pdf/

Abstract: Deceptive design patterns manipulate people into actions to which they would otherwise object. Despite growing research on deceptive design patterns, limited research examines their interplay with accessibility and visual accessibility technology (e.g., screen readers, screen magnification, braille displays). We present an interview and diary study with 16 people who use visual accessibility technology to better understand experiences with accessibility and deceptive design. We report participant experiences with six deceptive design patterns, including designs that are intentionally deceptive and designs where participants describe accessibility barriers unintentionally manifesting as deceptive, together with direct and indirect consequences of deceptive patterns. We discuss intent versus impact in accessibility and deceptive design, how access barriers exacerbate harms of deceptive design patterns, and impacts of deceptive design from a perspective of consequence-based accessibility. We propose that accessibility tools could help address deceptive design patterns by offering higher-level feedback to well-intentioned designers.
{"title":"Cultivating Computational Thinking and Social Play among Neurodiverse Preschoolers in Inclusive Classrooms.","authors":"Maitraye Das, Megan Tran, Amanda Chih-Han Ong, Julie A Kientz, Heather Feldner","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Computational thinking (CT) is regarded as a fundamental twenty-first century skill and has been implemented in many early childhood education curriculum. Yet, the needs of neurodivergent children have remained largely overlooked in the extensive research and technologies built to foster CT among children. To address this, we investigated how to support neurodiverse (i.e., groups involving neurodivergent and neurotypical) preschoolers aged 3-5 in learning CT concepts. Grounded in interviews with six teachers, we deployed an age-appropriate, programmable robot called KIBO in two preschool classrooms involving 12 neurodivergent and 17 neurotypical children for eight weeks. Using interaction analysis, we illustrate how neurodivergent children found enjoyment in assembling KIBO and learned to code with it while engaging in cooperative and competitive play with neurotypical peers and the adults. Through this, we discuss accessible adaptations needed to enhance CT among neurodivergent preschoolers and ways to reimagine technology-mediated social play for them.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. 
CHI Conference","volume":"2025 ","pages":"1-22"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12188882/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144499717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}