{"title":"Hevelius Report: Visualizing Web-Based Mobility Test Data For Clinical Decision and Learning Support.","authors":"Hongjin Lin, Tessa Han, Krzysztof Z Gajos, Anoopum S Gupta","doi":"10.1145/3663548.3688490","DOIUrl":"10.1145/3663548.3688490","url":null,"abstract":"<p><p><i>Hevelius</i>, a web-based computer mouse test, measures arm movement and has been shown to accurately evaluate severity for patients with Parkinson's disease and ataxias. A <i>Hevelius</i> session produces 32 numeric features, which may be hard to interpret, especially in time-constrained clinical settings. This work aims to support clinicians (and other stakeholders) in interpreting and connecting <i>Hevelius</i> features to clinical concepts. Through an iterative design process, we developed a visualization tool (<i>Hevelius Report</i>) that (1) abstracts six clinically relevant concepts from 32 features, (2) visualizes patient test results, and compares them to results from healthy controls and other patients, and (3) is an interactive app to meet the specific needs in different usage scenarios. Then, we conducted a preliminary user study through an online interview with three clinicians who were <i>not</i> involved in the project. They expressed interest in using <i>Hevelius Report</i>, especially for identifying subtle changes in their patients' mobility that are hard to capture with existing clinical tests. Future work will integrate the visualization tool into the current clinical workflow of a neurology team and conduct systematic evaluations of the tool's usefulness, usability, and effectiveness. <i>Hevelius Report</i> represents a promising solution for analyzing fine-motor test results and monitoring patients' conditions and progressions.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12239997/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144602405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Videoconferencing for Older Adults with Cognitive Concerns Using a Dramaturgical Lens.","authors":"Ruipu Hu, Ge Gao, Amanda Lazar","doi":"10.1145/3663548.3675647","DOIUrl":"10.1145/3663548.3675647","url":null,"abstract":"<p><p>While videoconferencing is a promising technology, it may present unique challenges and barriers for older adults with cognitive concerns. This paper presents a deconstructed view of videoconferencing technology use using a sociological dramaturgical framework developed by Erving Goffman. Our study recruited 17 older adults with varying cognitive concerns, employing technology discussion groups, interviews, and observations to gather data. Through a reflexive thematic analysis, we explore videoconferencing use among older adults with cognitive concerns, focusing on three major areas: the \"performances and roles\" where users adapt to new roles through videoconferencing; the \"backstage,\" which involves the physical and logistical setup; and the \"frontstage,\" where people communicate through audio and visual channels to present a desired impression. Our discussion generates insights into how deconstructing these elements can inform more meaningful and accessible HCI design.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12188971/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144499621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding How Blind Users Handle Object Recognition Errors: Strategies and Challenges.","authors":"Jonggi Hong, Hernisa Kacorri","doi":"10.1145/3663548.3675635","DOIUrl":"10.1145/3663548.3675635","url":null,"abstract":"<p><p>Object recognition technologies hold the potential to support blind and low-vision people in navigating the world around them. However, the gap between benchmark performances and practical usability remains a significant challenge. This paper presents a study aimed at understanding blind users' interaction with object recognition systems for identifying and avoiding errors. Leveraging a pre-existing object recognition system, URCam, fine-tuned for our experiment, we conducted a user study involving 12 blind and low-vision participants. Through in-depth interviews and hands-on error identification tasks, we gained insights into users' experiences, challenges, and strategies for identifying errors in camera-based assistive technologies and object recognition systems. During interviews, many participants preferred independent error review, while expressing apprehension toward misrecognitions. In the error identification task, participants varied viewpoints, backgrounds, and object sizes in their images to avoid and overcome errors. Even after repeating the task, participants identified only half of the errors, and the proportion of errors identified did not significantly differ from their first attempts. Based on these insights, we offer implications for designing accessible interfaces tailored to the needs of blind and low-vision users in identifying object recognition errors.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2024 ","pages":"1-15"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872236/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AccessShare: Co-designing Data Access and Sharing with Blind People.","authors":"Rie Kamikubo, Farnaz Zamiri Zeraati, Kyungjun Lee, Hernisa Kacorri","doi":"10.1145/3663548.3675612","DOIUrl":"10.1145/3663548.3675612","url":null,"abstract":"<p><p>Blind people are often called to contribute image data to datasets for AI innovation with the hope for future accessibility and inclusion. Yet, the visual inspection of the contributed images is inaccessible. To this day, we lack mechanisms for data inspection and control that are accessible to the blind community. To address this gap, we engage 10 blind participants in a scenario where they wear smartglasses and collect image data using an AI-infused application in their homes. We also engineer a design probe, a novel data access interface called AccessShare, and conduct a co-design study to discuss participants' needs, preferences, and ideas on consent, data inspection, and control. Our findings reveal the impact of interactive informed consent and the complementary role of data inspection systems such as AccessShare in facilitating communication between data stewards and blind data contributors. We discuss how key insights can guide future informed consent and data control to promote inclusive and responsible data practices in AI.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"4 ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12188854/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144499622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling Uniform Computer Interaction Experience for Blind Users through Large Language Models.","authors":"Satwik Ram Kodandaram, Utku Uckun, Xiaojun Bi, I V Ramakrishnan, Vikas Ashok","doi":"10.1145/3663548.3675605","DOIUrl":"10.1145/3663548.3675605","url":null,"abstract":"<p><p>Blind individuals, who by necessity depend on screen readers to interact with computers, face considerable challenges in navigating the diverse and complex graphical user interfaces of different computer applications. The heterogeneity of various application interfaces often requires blind users to remember different keyboard combinations and navigation methods to use each application effectively. To alleviate this significant interaction burden imposed by heterogeneous application interfaces, we present Savant, a novel assistive technology powered by large language models (LLMs) that allows blind screen reader users to interact uniformly with any application interface through natural language. Novelly, Savant can automate a series of tedious screen reader actions on the control elements of the application when prompted by a natural language command from the user. These commands can be flexible in the sense that the user is not strictly required to specify the exact names of the control elements in the command. A user study evaluation of Savant with 11 blind participants demonstrated significant improvements in interaction efficiency and usability compared to current practices.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707650/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information Wayfinding of Screen Reader Users: Five Personas to Expand Conceptualizations of User Experiences.","authors":"J Bern Jordan, Victoria Van Hyning, Mason A Jones, Rachael Bradley Montgomery, Elizabeth Bottner, Evan Tansil","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Screen readers are important assistive technologies for blind people, but they are complex and can be challenging to use effectively. Over the course of several studies with screen reader users, the authors have found wide variations and sometimes surprising differences in people's skills, preferences, navigation, and troubleshooting approaches when using screen readers. These differences may not always be considered in research and development. To help address this shortcoming, we have developed five user personas describing a range of screen reader experiences.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2024 47","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872227/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Screen Magnification for Readers with Low Vision: A Study on Usability and Performance.","authors":"Meini Tang, Roberto Manduchi, Susana Chung, Raquel Prado","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We present a study with 20 participants with low vision who operated two types of screen magnification (lens and full) on a laptop computer to read two types of document (text and web page). Our purposes were to comparatively assess the two magnification modalities, and to obtain some insight into how people with low vision use the mouse to control the center of magnification. These observations may inform the design of systems for the automatic control of the center of magnification. Our results show that there were no significant differences in reading performances or in subjective preferences between the two magnification modes. However, when using the lens mode, our participants adopted more consistent and uniform mouse motion patterns, while longer and more frequent pauses and shorter overall path lengths were measured using the full mode. Analysis of the distribution of gaze points (as measured by a gaze tracker) using the full mode shows that, when reading a text document, most participants preferred to move the area of interest to a specific region of the screen.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10923554/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140095279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blind Users Accessing Their Training Images in Teachable Object Recognizers.","authors":"Jonggi Hong, Jaina Gandhi, Ernest Essuah Mensah, Farnaz Zamiri Zeraati, Ebrima Haddy Jarjue, Kyungjun Lee, Hernisa Kacorri","doi":"10.1145/3517428.3544824","DOIUrl":"10.1145/3517428.3544824","url":null,"abstract":"<p><p>Teachable object recognizers provide a solution for a very practical need for blind people - instance level object recognition. They assume one can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, a hand is included, the photos is blurred, and how much photos vary from each other. Our descriptors are built into open source testbed iOS app, called MYCam. In a remote user study in (<i>N</i> = 12) blind participants' homes, we show how descriptors, even when error-prone, support experimentation and have a positive impact in the quality of training set that can translate to model performance though this gain is not uniform. Participants found the app simple to use indicating that they could effectively train it and that the descriptors were useful. However, many found the training being tedious, opening discussions around the need for balance between information, time, and cognitive load.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2022 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10008526/pdf/nihms-1869981.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9111608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile Phone Use by People with Mild to Moderate Dementia: Uncovering Challenges and Identifying Opportunities: Mobile Phone Use by People with Mild to Moderate Dementia.","authors":"Emma Dixon, Rain Michaels, Xiang Xiao, Yu Zhong, Patrick Clary, Ajit Narayanan, Robin Brewer, Amanda Lazar","doi":"10.1145/3517428.3544809","DOIUrl":"10.1145/3517428.3544809","url":null,"abstract":"<p><p>With the rising usage of mobile phones by people with mild dementia, and the documented barriers to technology use that exist for people with dementia, there is an open opportunity to study the specifics of mobile phone use by people with dementia. In this work we provide a first step towards filling this gap through an interview study with fourteen people with mild to moderate dementia. Our analysis yields insights into mobile phone use by people with mild to moderate dementia, challenges they experience with mobile phone use, and their ideas to address these challenges. Based on these findings, we discuss design opportunities to help achieve more accessible and supportive technology use for people with dementia. Our work opens up new opportunities for the design of systems focused on augmenting and enhancing the abilities of people with dementia.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2022 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10202486/pdf/nihms-1865459.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9582599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Representativeness in Accessibility Datasets: A Meta-Analysis.","authors":"Rie Kamikubo, Lining Wang, Crystal Marte, Amnah Mahmood, Hernisa Kacorri","doi":"10.1145/3517428.3544826","DOIUrl":"10.1145/3517428.3544826","url":null,"abstract":"<p><p>As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we contribute an analysis of the representativeness of age, gender, and race & ethnicity in accessibility datasets-datasets sourced from people with disabilities and older adults-that can potentially play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation within datasets sourced by people with disabilities by reviewing publicly-available information of 190 datasets, we call these accessibility datasets. We find that accessibility datasets represent diverse ages, but have gender and race representation gaps. Additionally, we investigate how the sensitive and complex nature of demographic variables makes classification difficult and inconsistent (<i>e.g.</i>, gender, race & ethnicity), with the source of labeling often unknown. By reflecting on the current challenges and opportunities for representation of disabled data contributors, we hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2022 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024595/pdf/nihms-1869788.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9153813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}