Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI Conference) — Latest Articles

Exploring Collaboration Breakdowns Between Provider Teams and Patients in Post-Surgery Care.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2026-04-01. Epub Date: 2026-04-13. DOI: 10.1145/3772318.3791960
Authors: Bingsheng Yao, Menglin Zhao, Zhan Zhang, Pengqi Wang, Emma G Chester, Changchang Yin, Tianshi Li, Varun Mishra, Lace M Padilla, Odysseas P Chatzipanagiotou, Timothy Pawlik, Ping Zhang, Weidan Cao, Dakuo Wang
Abstract: Post-surgery care involves ongoing collaboration between provider teams and patients, spanning post-surgery hospitalization through home recovery after discharge. While prior HCI research has primarily examined patients' challenges at home, less is known about how provider teams coordinate discharge preparation and care handoffs, and how breakdowns in communication and care pathways may affect patient recovery. To investigate this gap, we conducted semi-structured interviews with 13 healthcare providers and 4 patients in the context of gastrointestinal (GI) surgery. We found that coordination boundaries between in- and out-patient teams, coupled with complex organizational structures within teams, impeded the "invisible work" of preparing patients' home care plans and triaging patient information. For patients, these breakdowns resulted in inadequate preparation for the home transition and fragmented self-collected data, both of which undermine timely clinical decision-making. Based on these findings, we outline design opportunities to formalize task ownership and handoffs, contextualize co-temporal signals, and align care plans with home resources.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13110784/pdf/
Citations: 0
Designing for Wayfinding in VR: Linking Navigation Interfaces to Spatial Learning and Cognitive Mapping.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2026-04-01. Epub Date: 2026-04-13. DOI: 10.1145/3772318.3791145
Authors: Armin Mostafavi, Zhiwen Qiu, Tong Bill Xu, Wenqian Niu, Saleh Kalantari
Abstract: Various virtual locomotion techniques and visual transition methods are used in VR-based navigation research, yet few studies have systematically examined their effects on spatial learning, cognitive map formation, and navigational performance in complex indoor environments. We conducted a between-subjects study (N=142) in two high-fidelity VR hospital contexts, including free exploration and task-based wayfinding, while treating locomotion and viewpoint transitions as experimental factors. Spatial learning was measured through pointing, distance estimation, and sketch-map accuracy; performance was measured through completion time and distance traveled; and experience was measured through cybersickness, perceived presence, and usability. Locomotion techniques affected task completion time, with teleportation associated with faster performance in the task-based context. Spatial learning effects were mixed, with patterns indicating that techniques without viewpoint transitions may better support cognitive mapping. Empirical insights and guidelines are provided to improve the reliability and real-world applicability of VR-based wayfinding research.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13148787/pdf/
Citations: 0
Deploying and Examining Beacon for At-Home Patient Self-Monitoring with Critical Flicker Frequency.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-05-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3714240
Authors: Richard Li, Philip Vutien, Sabrina Omer, Michael Yacoub, George Ioannou, Ravi Karkar, Sean A Munson, James Fogarty
Abstract: Chronic liver disease can lead to neurological conditions that result in coma or death. Although early detection can allow for intervention, testing is infrequent and unstandardized. Beacon is a device for at-home patient self-measurement of cognitive function via critical flicker frequency (CFF), which is the frequency at which a flickering light appears steady to an observer. This paper presents our efforts in iterating on Beacon's hardware and software to enable at-home use, then reports on an at-home deployment with 21 patients taking measurements over 6 weeks. We found that measurements were stable despite being taken at different times and in different environments. Finally, through interviews with 15 patients and 5 hepatologists, we report on participant experiences with Beacon, preferences around how CFF data should be presented, and the role of caregivers in helping patients manage their condition. Informed by our experiences with Beacon, we further discuss design implications for home health devices.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12165253/pdf/
Citations: 0
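The abstract defines critical flicker frequency but does not describe Beacon's measurement procedure. Purely as an illustration of how a CFF threshold could be estimated, here is a minimal method-of-limits sketch; the function names, frequency range, and step size are hypothetical, not taken from the paper:

```python
def measure_cff(perceives_flicker, start_hz=20.0, stop_hz=60.0, step_hz=0.5):
    """Method-of-limits sketch: ramp the flicker frequency upward until the
    observer first reports the light as steady; that frequency is the CFF
    estimate. `perceives_flicker(hz)` stands in for the human observer's
    response at a given flicker frequency."""
    hz = start_hz
    while hz <= stop_hz:
        if not perceives_flicker(hz):
            return hz  # first frequency perceived as steady
        hz += step_hz
    return stop_hz  # ceiling: flicker still visible at the maximum tested


# Simulated observer whose true threshold is 39 Hz (hypothetical value).
cff_estimate = measure_cff(lambda hz: hz < 39.0)
```

In practice, psychophysical protocols typically average ascending and descending passes or use adaptive staircases to reduce response bias; the single ascending pass above is only the simplest form.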
Feasibility and Utility of Multimodal Micro Ecological Momentary Assessment on a Smartwatch.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3714086
Authors: Ha Le, Veronika Potter, Rithika Lakshminarayanan, Varun Mishra, Stephen Intille
Abstract: Micro ecological momentary assessments (μEMAs) allow participants to answer a short survey quickly with a tap on a smartwatch screen or a brief speech input. The short interaction time and low cognitive burden enable researchers to collect self-reports at high frequency (once every 5-15 minutes) while maintaining participant engagement. Systems with a single input modality, however, may carry different contextual biases that could affect compliance. We combined two input modalities to create a multimodal-μEMA system, allowing participants to choose between speech or touch input to self-report. To investigate system usability, we conducted a seven-day field study where we asked 20 participants to label their posture and/or physical activity once every five minutes throughout their waking day. Despite the intense prompting interval, participants responded to 72.4% of the prompts. We found participants gravitated towards different modalities based on personal preferences and contextual states, highlighting the need to consider these factors when designing context-aware multimodal μEMA systems.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12718675/pdf/
Citations: 0
Designing Technologies for Value-based Mental Healthcare: Centering Clinicians' Perspectives on Outcomes Data Specification, Collection, and Use.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3713481
Authors: Daniel A Adler, Yuewen Yang, Thalia Viranda, Anna R Van Meter, Emma Elizabeth McGinty, Tanzeem Choudhury
Abstract: Health information technologies are transforming how mental healthcare is paid for through value-based care programs, which tie payment to data quantifying care outcomes. But it is unclear what outcomes data these technologies should store, how to engage users in data collection, and how outcomes data can improve care. Given these challenges, we conducted interviews with 30 U.S.-based mental health clinicians to explore the design space of health information technologies that support outcomes data specification, collection, and use in value-based mental healthcare. Our findings center clinicians' perspectives on aligning outcomes data for payment programs and care; opportunities for health technologies and personal devices to improve data collection; and considerations for using outcomes data to hold stakeholders including clinicians, health insurers, and social services financially accountable in value-based mental healthcare. We conclude with implications for future research designing and developing technologies supporting value-based care across stakeholders involved with mental health service delivery.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12218218/pdf/
Citations: 0
"All Day, Every Day, Listening to Trauma": Investigating Features of Digital Interventions for Empathy-Based Stress and Burnout.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3713588
Authors: Connie W Chau, Colleen Norton, Kaylee Payne Kruzan, Maia Jacobs
Abstract: Frontline workers (FLWs) in gender-based violence (GBV) service provision regularly engage in intense emotional labor to provide survivors of GBV with essential, often life-saving, services. However, FLWs experience intense burnout, resulting in turnover rates as high as 50% annually and a critical loss of services for survivors. In order to design digital burnout interventions in a context where so few exist, we recruited 15 FLWs for a 3-stage qualitative study where they used two existing applications to reflect on, and reimagine, concrete design features necessary to address FLW burnout in GBV service provision. We contribute important findings regarding designing specifically for empathy-based stress (EBS) in frontline work contexts, preferences for activities, and desired interactivity, among other requirements for interventions. We synthesize our design recommendations through an example scenario of a collaborative just-in-time adaptive intervention (co-JITAI) system that integrates peer-based support that can adapt to users' changing needs and contexts over time.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13077721/pdf/
Citations: 0
Beyond Visual Perception: Insights from Smartphone Interaction of Visually Impaired Users with Large Multimodal Models.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3714210
Authors: Jingyi Xie, Rui Yu, H E Zhang, Syed Masum Billah, Sooyeon Lee, John M Carroll
Abstract: Large multimodal models (LMMs) have enabled new AI-powered applications that help people with visual impairments (PVI) receive natural language descriptions of their surroundings through audible text. We investigated how this emerging paradigm of visual assistance transforms how PVI perform and manage their daily tasks. Moving beyond basic usability assessments, we examined both the capabilities and limitations of LMM-based tools in personal and social contexts, while exploring design implications for their future development. Through interviews with 14 visually impaired users and analysis of image descriptions from both participants and social media using Be My AI (an LMM-based application), we identified two key limitations. First, these systems' context awareness suffers from hallucinations and misinterpretations of social contexts, styles, and human identities. Second, their intent-oriented capabilities often fail to grasp and act on users' intentions. Based on these findings, we propose design strategies for improving both human-AI and AI-AI interactions, contributing to the development of more effective, interactive, and personalized assistive technologies.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12338113/pdf/
Citations: 0
VideoA11y: Method and Dataset for Accessible Video Description.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3714096
Authors: Chaoyu Li, Sid Padmanabhuni, Maryam S Cheema, Hasti Seifi, Pooyan Fazli
Abstract: Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y/.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398407/pdf/
Citations: 0
Tap&Say: Touch Location-Informed Large Language Model for Multimodal Text Correction on Smartphones.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3713376
Authors: Maozheng Zhao, Shanqing Cai, Shumin Zhai, Michael Xuelin Huang, Henry Huang, I V Ramakrishnan, Nathan G Huang, Michael G Huang, Xiaojun Bi
Abstract: While voice input offers a convenient alternative to traditional text editing on mobile devices, practical implementations face two key challenges: 1) reliably distinguishing between editing commands and content dictation, and 2) effortlessly pinpointing the intended edit location. We propose Tap&Say, a novel multimodal system that combines touch interactions with Large Language Models (LLMs) for accurate text correction. By tapping near an error, users signal their edit intent and location, addressing both challenges. Then, the user speaks the correction text. Tap&Say utilizes the touch location, voice input, and existing text to generate contextually relevant correction suggestions. We propose a novel touch location-informed attention layer that integrates the tap location into the LLM's attention mechanism, enabling it to utilize the tap location for text correction. We fine-tuned the touch location-informed LLM on synthetic touch locations and correction commands, achieving significantly higher correction accuracy than the state-of-the-art method VT [45]. A 16-person user study demonstrated that Tap&Say outperforms VT [45] with 16.4% shorter task completion time and 47.5% fewer keyboard clicks, and is preferred by users.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12723524/pdf/
Citations: 0
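The abstract names a "touch location-informed attention layer" but does not specify its form. Purely as an illustration of the general idea, here is a minimal sketch in which the tap location biases attention logits by each token's on-screen distance from the tap; the function name, the additive-bias formulation, and the `scale` parameter are all hypothetical, not the paper's implementation:

```python
import numpy as np

def touch_biased_attention(scores, token_positions, tap_position, scale=0.01):
    """Sketch: re-weight attention so key tokens whose on-screen positions
    are closer to the user's tap receive more attention mass.

    scores          -- (num_queries, num_keys) raw attention logits
    token_positions -- (num_keys, 2) x/y pixel coordinates per key token
    tap_position    -- (2,) x/y pixel coordinates of the tap
    """
    # Euclidean distance of each key token from the tap, shape (num_keys,)
    dist = np.linalg.norm(token_positions - tap_position, axis=-1)
    # Penalize tokens far from the tap with an additive logit bias
    biased = scores - scale * dist
    # Standard numerically stable softmax over the key axis
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With uniform logits and two tokens at 0 and 100 pixels from the tap, the nearer token receives the larger attention weight; in a real model the bias would be learned and applied inside each transformer attention head rather than bolted on afterward.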
What's In Your Kit? Mental Health Technology Kits for Depression Self-Management.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). Pub Date: 2025-04-01. Epub Date: 2025-04-25. DOI: 10.1145/3706598.3713585
Authors: Eleanor R Burgess, David C Mohr, Sean A Munson, Madhu C Reddy
Abstract: This paper characterizes the mental health technology "kits" of individuals managing depression: the specific technologies on their digital devices and physical items in their environments that people turn to as part of their mental health management. We interviewed 28 individuals living across the United States who use bundles of connected tools for both individual and collaborative mental health activities. We contribute to the HCI community by conceptualizing these tool assemblages that people managing depression have constructed over time. We detail categories of tools, describe kit characteristics (intentional, adaptable, available), and present participant ideas for future mental health support technologies. We then discuss what a mental health technology kit perspective means for researchers and designers and describe design principles (building within current toolkits; creating new tools from current self-management strategies; and identifying gaps in people's current kits) to support depression self-management across an evolving set of tools.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12118807/pdf/
Citations: 0