"That's in the eye of the beholder": Layers of Interpretation in Image Descriptions for Fictional Representations of People with Disabilities
E. J. Edwards, Kyle Lewis Polster, Isabel Tuason, Emily Blank, Michael Gilbert, Stacy M. Branham
In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471222
Abstract: Image accessibility is an established research area in Accessible Computing and a key area of digital accessibility for blind and low vision (BLV) people worldwide. Recent work has delved deeper into the question of how image descriptions should properly reflect the complexities of marginalized identity. However, when real subjects are not available to consult on their preferred identity terminology, as is the case with fictional representations of disability, the issue arises again of how to create accurate and sensitive image descriptions. We worked with 25 participants to assess and iteratively co-design image descriptions for nine fictional representations of people with disabilities. Through nine focus groups and nineteen interviews, we discovered five key themes, which we present here along with an analysis of the layers of interpretation at work in the production and consumption of image descriptions for fictional representations.
Opportunities for Supporting Self-efficacy Through Orientation & Mobility Training Technologies for Blind and Partially Sighted People
Maryam Bandukda, C. Holloway, Aneesha Singh, G. Barbareschi, N. Bianchi-Berthouze
In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471224
Abstract: Orientation and mobility (O&M) training provides essential skills and techniques for safe and independent mobility for blind and partially sighted (BPS) people. The demand for O&M training is increasing as the number of people living with vision impairment increases. Despite the growing portfolio of HCI research on assistive technologies (AT), few studies have examined the experiences of BPS people during O&M training, including the use of technology to aid O&M training. To address this gap, we conducted semi-structured interviews with 20 BPS people and 8 Mobility and Orientation Trainers (MOT). The interviews were thematically analysed and organised into four overarching themes discussing factors influencing the self-efficacy beliefs of BPS people: Tools and Strategies for O&M Training, Technology Use in O&M Training, Changing Personal and Social Circumstances, and Social Influences. We further highlight opportunities for combinations of multimodal technologies to increase access to and effectiveness of O&M training.
{"title":"Activity Recognition in Older Adults with Training Data from Younger Adults: Preliminary Results on in Vivo Smartwatch Sensor Data","authors":"Sabahat Fatima","doi":"10.1145/3441852.3476475","DOIUrl":"https://doi.org/10.1145/3441852.3476475","url":null,"abstract":"Self-tracking using commodity wearables such as smartwatches can help older adults reduce sedentary behaviors and engage in physical activity. However, activity recognition applications that are typically deployed in these wearables tend to be trained on datasets that best represent younger adults. We explore how our activity recognition model, a hybrid of long short-term memory and convolutional layers, pre-trained on smartwatch data from younger adults, performs on older adult data. We report results on week-long data from two older adults collected in a preliminary study in the wild with ground-truth annotations based on activPAL, a thigh-worn sensor. We find that activity recognition for older adults remains challenging even when comparing our model’s performance to state of the art deployed models such as the Google Activity Recognition API. More so, we show that models trained on younger adults tend to perform worse on older adults.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115638802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collecting Sidewalk Network Data at Scale for Accessible Pedestrian Travel","authors":"Yuxiang Zhang, Sachin Mehta, A. Caspi","doi":"10.1145/3441852.3476560","DOIUrl":"https://doi.org/10.1145/3441852.3476560","url":null,"abstract":"Sidewalks are central to an accessible transportation network, as they connect all other transportation modes. The street-side environment, especially the location and connectivity of the sidewalks, has not been widely integrated into information systems used to report accessibility and walkability in wayfinding applications. Typical sidewalk mapping methods rely on surveyor collections, which are non-standardized, laborious, costly, difficult to maintain, and do not scale well. In this work, we introduce a working proof-of-concept system for automated mapping of sidewalk networks on portable computing devices. Our system utilizes efficient neural networks, image sensing, GPS, and compact hardware to perform sidewalk mapping on portable devices. We discuss future opportunities for cities and transportation agencies to advance their knowledge of the transportation network they own and manage in order to improve accessibility for all travelers.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115948322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interdependent Variables: Remotely Designing Tactile Graphics for an Accessible Workflow
Lilian de Greef, Dominik Moritz, Cynthia L. Bennett
In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476468
Abstract: In this experience report, we offer a case study of blind and sighted colleagues creating an accessible workflow to collaborate on a data visualization-focused project. We outline our process for making the project's shared data representations accessible through incorporating both handmade and machine-embossed tactile graphics. We also share lessons and strategies for considering team needs and addressing contextual constraints like remote collaboration during the COVID-19 pandemic. More broadly, this report contributes to ongoing research into the ways accessibility is interdependent by arguing that access work must be a collective responsibility and properly supported with recognition, resources, and infrastructure.
{"title":"Is home-based webcam eye-tracking with older adults living with and without Alzheimer's disease feasible?","authors":"A. Greenaway, S. Nasuto, Aileen Ho, F. Hwang","doi":"10.1145/3441852.3476565","DOIUrl":"https://doi.org/10.1145/3441852.3476565","url":null,"abstract":"Home-based eye tracking studies using built-in webcams are typically conducted with younger people and incur long set-up times and a large number of calibration failures. We investigated the set-up time, number of calibration failures and issues faced by twelve older adults living with and without Alzheimer's disease during home-based eye tracking. We found that home-based eye tracking is feasible with set-up support and we provide recommendations for future studies of this nature.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125309156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding Screen-Reader Users' Experiences with Online Data Visualizations
Ather Sharif, S. Chintalapati, J. Wobbrock, Katharina Reinecke
In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471202
Abstract: Online data visualizations are widely used to communicate information from simple statistics to complex phenomena, supporting people in gaining important insights from data. However, due to the defining visual nature of data visualizations, extracting information from visualizations can be difficult or impossible for screen-reader users. To assess screen-reader users' challenges with online data visualizations, we conducted two empirical studies: (1) a qualitative study with nine screen-reader users, and (2) a quantitative study with 36 screen-reader and 36 non-screen-reader users. Our results show that due to the inaccessibility of online data visualizations, screen-reader users extract information 61.48% less accurately and spend 210.96% more time interacting with online data visualizations compared to non-screen-reader users. Additionally, our findings show that online data visualizations are commonly indiscoverable to screen readers. In visualizations that are discoverable and comprehensible, screen-reader users suggested tabular and textual representation of data as techniques to improve the accessibility of online visualizations. Taken together, our results provide empirical evidence of the inequalities screen-reader users face in their interaction with online data visualizations.
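The participants above suggested tabular and textual representations as accessible alternatives to charts. A minimal sketch of that remediation — the function names and markup choices here are illustrative, not from the paper — emits the data behind a chart as an HTML table with `scope` attributes plus a one-sentence textual summary, both of which screen readers can announce directly:

```python
def accessible_table(labels, values, caption):
    """Render a chart's underlying data as an HTML table with a caption
    and scoped headers, so screen readers can navigate cell by cell."""
    rows = "".join(
        f"<tr><th scope='row'>{label}</th><td>{value}</td></tr>"
        for label, value in zip(labels, values))
    return (f"<table><caption>{caption}</caption>"
            f"<tr><th scope='col'>Category</th><th scope='col'>Value</th></tr>"
            f"{rows}</table>")

def text_summary(labels, values):
    """One-sentence textual fallback stating the extremes of the data."""
    hi = labels[values.index(max(values))]
    lo = labels[values.index(min(values))]
    return f"Highest: {hi} ({max(values)}); lowest: {lo} ({min(values)})."

# Usage: data that would otherwise only exist as an inaccessible chart.
html = accessible_table(["A", "B"], [3, 5], "Sales by region")
summary = text_summary(["A", "B"], [3, 5])
```

Pairing the full table with a short summary addresses both modes the study observed: detailed value extraction and quick overview.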
{"title":"Image Explorer: Multi-Layered Touch Exploration to Make Images Accessible","authors":"Jaewook Lee, Yi-Hao Peng, Jaylin Herskovitz, Anhong Guo","doi":"10.1145/3441852.3476548","DOIUrl":"https://doi.org/10.1145/3441852.3476548","url":null,"abstract":"Blind or visually impaired (BVI) individuals often rely on alternative text (alt-text) in order to understand an image; however, alt-text is often missing or incomplete. Automatically-generated captions are a more scalable alternative, but they are also often missing crucial details, and, sometimes, are completely incorrect, which may still be falsely trusted by BVI users. We hypothesize that additional information could help BVI users better judge the correctness of an auto-generated caption. To achieve this, we present Image Explorer, a touch-based multi-layered image exploration system that enables users to explore the spatial layout and information hierarchies in an image. Image Explorer leverages several off-the-shelf deep learning models to generate segmentation and labeling results for an image, combines and filters the generated information, and presents the resulted information in hierarchical layers. In a pilot study with three BVI users, participants used Image Explorer, Seeing AI, and Facebook to explore images with auto-generated captions of diverging quality, and judge the correctness of the captions. Preliminary results show that participants made more accurate judgements about the correctness of the captions when using Image Explorer, although they were highly confident about their judgement regardless of the tool used. 
Overall, Image Explorer is a novel touch exploration system that makes images more accessible for BVI users by potentially encouraging skepticism and enabling users to independently validate auto-generated captions.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121782779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
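Image Explorer combines segmentation and labeling output into hierarchical layers for touch exploration. One simple way to derive such a hierarchy — an illustrative assumption, not the authors' actual algorithm — is to nest detections by bounding-box containment, so that a "face" sits one layer below the "person" that contains it:

```python
def contains(outer, inner):
    """True if bounding box `outer` (x1, y1, x2, y2) fully contains `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])

def layer_of(box, detections):
    """Depth of a detection = number of other detections that contain it.
    Layer 0 boxes are top-level regions; deeper layers hold finer details."""
    return sum(1 for other, _ in detections
               if other != box and contains(other, box))

# Hypothetical detector output: (bounding box, label) pairs.
detections = [
    ((0, 0, 100, 100), "person"),
    ((10, 10, 40, 40), "face"),
    ((15, 15, 25, 25), "eye"),
]
layers = {label: layer_of(box, detections) for box, label in detections}
```

Presenting layer 0 first and revealing deeper layers on demand matches the coarse-to-fine exploration the system is described as supporting.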
{"title":"I See What You’re Saying: A Literature Review of Eye Tracking Research in Communication of Deaf or Hard of Hearing Users","authors":"Chanchal Agrawal, R. Peiris","doi":"10.1145/3441852.3471209","DOIUrl":"https://doi.org/10.1145/3441852.3471209","url":null,"abstract":"Deaf or hard-of-hearing (DHH) individuals heavily rely on their visual senses to be aware about their environment, giving them heightened visual cognition and improved attention management strategies. Thus, the eyes have shown to play a significant role in these visual communication practices and, therefore, many various researches have adopted methodologies, specifically eye-tracking, to understand the gaze patterns and analyze the behavior of DHH individuals. In this paper, we provide a literature review from 55 papers and data analysis from eye-tracking studies concerning hearing impairment, attention management strategies, and their mode of communication such as Visual and Textual based communication. Through this survey, we summarize the findings and provide future research directions.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133183977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Adaptive Sports: Challenges & Opportunities to Improve Accessibility and Analytics","authors":"Rushil Khurana, Ashley Wang, Patrick Carrington","doi":"10.1145/3441852.3471223","DOIUrl":"https://doi.org/10.1145/3441852.3471223","url":null,"abstract":"A recent surge in sensing platforms for sports has been accompanied by drastic improvements in the quality of data analytics. This improved quality has catalyzed notable progress in training techniques, athletic performance tracking, real-time strategy management, and even better refereeing. However, despite a sustained growth in the number of para-athletes, there has been little exploration into the accessibility and data analytics needs for adaptive sports. We interviewed 18 participants in different roles (athletes, coaches, and high-performance managers) across six adaptive sports. We probed them on their current practices, existing challenges, and analytical needs. We uncovered common themes prevalent across all six sports and further examined findings in three groups: (1) blind sports; (2) wheelchair sports; and (3) adaptive sports with high equipment. Our study highlights the challenges faced by different adaptive sports and unearths opportunities for future research to improve accessibility and address specific needs for each sport.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134550903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}