Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

"That's in the eye of the beholder": Layers of Interpretation in Image Descriptions for Fictional Representations of People with Disabilities
E. J. Edwards, Kyle Lewis Polster, Isabel Tuason, Emily Blank, Michael Gilbert, Stacy M. Branham
DOI: 10.1145/3441852.3471222 (published 2021-10-17)
Abstract: Image accessibility is an established research area in Accessible Computing and a key area of digital accessibility for blind and low vision (BLV) people worldwide. Recent work has delved deeper into the question of how image descriptions should properly reflect the complexities of marginalized identity. However, when real subjects are not available to consult on their preferred identity terminology, as is the case with fictional representations of disability, the issue arises again of how to create accurate and sensitive image descriptions. We worked with 25 participants to assess and iteratively co-design image descriptions for nine fictional representations of people with disabilities. Through nine focus groups and nineteen interviews, we discovered five key themes which we present here along with an analysis of the layers of interpretation at work in the production and consumption of image descriptions for fictional representations.
Citations: 6
Opportunities for Supporting Self-efficacy Through Orientation & Mobility Training Technologies for Blind and Partially Sighted People
Maryam Bandukda, C. Holloway, Aneesha Singh, G. Barbareschi, N. Bianchi-Berthouze
DOI: 10.1145/3441852.3471224 (published 2021-10-17)
Abstract: Orientation and mobility (O&M) training provides essential skills and techniques for safe and independent mobility for blind and partially sighted (BPS) people. The demand for O&M training is increasing as the number of people living with vision impairment increases. Despite the growing portfolio of HCI research on assistive technologies (AT), few studies have examined the experiences of BPS people during O&M training, including the use of technology to aid O&M training. To address this gap, we conducted semi-structured interviews with 20 BPS people and 8 Mobility and Orientation Trainers (MOT). The interviews were thematically analysed and organised into four overarching themes discussing factors influencing the self-efficacy belief of BPS people: Tools and Strategies for O&M Training, Technology Use in O&M Training, Changing Personal and Social Circumstances, and Social Influences. We further highlight opportunities for combinations of multimodal technologies to increase access to and effectiveness of O&M training.
Citations: 4
Activity Recognition in Older Adults with Training Data from Younger Adults: Preliminary Results on in Vivo Smartwatch Sensor Data
Sabahat Fatima
DOI: 10.1145/3441852.3476475 (published 2021-10-17)
Abstract: Self-tracking using commodity wearables such as smartwatches can help older adults reduce sedentary behaviors and engage in physical activity. However, activity recognition applications that are typically deployed in these wearables tend to be trained on datasets that best represent younger adults. We explore how our activity recognition model, a hybrid of long short-term memory and convolutional layers, pre-trained on smartwatch data from younger adults, performs on older adult data. We report results on week-long data from two older adults collected in a preliminary study in the wild with ground-truth annotations based on activPAL, a thigh-worn sensor. We find that activity recognition for older adults remains challenging even when comparing our model's performance to state-of-the-art deployed models such as the Google Activity Recognition API. Moreover, we show that models trained on younger adults tend to perform worse on older adults.
Citations: 3
Collecting Sidewalk Network Data at Scale for Accessible Pedestrian Travel
Yuxiang Zhang, Sachin Mehta, A. Caspi
DOI: 10.1145/3441852.3476560 (published 2021-10-17)
Abstract: Sidewalks are central to an accessible transportation network, as they connect all other transportation modes. The street-side environment, especially the location and connectivity of the sidewalks, has not been widely integrated into information systems used to report accessibility and walkability in wayfinding applications. Typical sidewalk mapping methods rely on surveyor collections, which are non-standardized, laborious, costly, difficult to maintain, and do not scale well. In this work, we introduce a working proof-of-concept system for automated mapping of sidewalk networks on portable computing devices. Our system utilizes efficient neural networks, image sensing, GPS, and compact hardware to perform sidewalk mapping on portable devices. We discuss future opportunities for cities and transportation agencies to advance their knowledge of the transportation network they own and manage in order to improve accessibility for all travelers.
Citations: 3
Interdependent Variables: Remotely Designing Tactile Graphics for an Accessible Workflow
Lilian de Greef, Dominik Moritz, Cynthia L. Bennett
DOI: 10.1145/3441852.3476468 (published 2021-10-17)
Abstract: In this experience report, we offer a case study of blind and sighted colleagues creating an accessible workflow to collaborate on a data visualization-focused project. We outline our process for making the project's shared data representations accessible through incorporating both handmade and machine-embossed tactile graphics. We also share lessons and strategies for considering team needs and addressing contextual constraints like remote collaboration during the COVID-19 pandemic. More broadly, this report contributes to ongoing research into the ways accessibility is interdependent by arguing that access work must be a collective responsibility and properly supported with recognition, resources, and infrastructure.
Citations: 7
Is home-based webcam eye-tracking with older adults living with and without Alzheimer's disease feasible?
A. Greenaway, S. Nasuto, Aileen Ho, F. Hwang
DOI: 10.1145/3441852.3476565 (published 2021-10-17)
Abstract: Home-based eye tracking studies using built-in webcams are typically conducted with younger people and incur long set-up times and a large number of calibration failures. We investigated the set-up time, number of calibration failures, and issues faced by twelve older adults living with and without Alzheimer's disease during home-based eye tracking. We found that home-based eye tracking is feasible with set-up support, and we provide recommendations for future studies of this nature.
Citations: 3
Understanding Screen-Reader Users' Experiences with Online Data Visualizations
Ather Sharif, S. Chintalapati, J. Wobbrock, Katharina Reinecke
DOI: 10.1145/3441852.3471202 (published 2021-10-17)
Abstract: Online data visualizations are widely used to communicate information from simple statistics to complex phenomena, supporting people in gaining important insights from data. However, due to the defining visual nature of data visualizations, extracting information from visualizations can be difficult or impossible for screen-reader users. To assess screen-reader users' challenges with online data visualizations, we conducted two empirical studies: (1) a qualitative study with nine screen-reader users, and (2) a quantitative study with 36 screen-reader and 36 non-screen-reader users. Our results show that due to the inaccessibility of online data visualizations, screen-reader users extract information 61.48% less accurately and spend 210.96% more time interacting with online data visualizations compared to non-screen-reader users. Additionally, our findings show that online data visualizations are commonly indiscoverable to screen readers. In visualizations that are discoverable and comprehensible, screen-reader users suggested tabular and textual representation of data as techniques to improve the accessibility of online visualizations. Taken together, our results provide empirical evidence of the inequalities screen-reader users face in their interaction with online data visualizations.
Citations: 38
Image Explorer: Multi-Layered Touch Exploration to Make Images Accessible
Jaewook Lee, Yi-Hao Peng, Jaylin Herskovitz, Anhong Guo
DOI: 10.1145/3441852.3476548 (published 2021-10-17)
Abstract: Blind or visually impaired (BVI) individuals often rely on alternative text (alt-text) in order to understand an image; however, alt-text is often missing or incomplete. Automatically-generated captions are a more scalable alternative, but they are also often missing crucial details and, sometimes, are completely incorrect, yet may still be falsely trusted by BVI users. We hypothesize that additional information could help BVI users better judge the correctness of an auto-generated caption. To achieve this, we present Image Explorer, a touch-based multi-layered image exploration system that enables users to explore the spatial layout and information hierarchies in an image. Image Explorer leverages several off-the-shelf deep learning models to generate segmentation and labeling results for an image, combines and filters the generated information, and presents the resulting information in hierarchical layers. In a pilot study with three BVI users, participants used Image Explorer, Seeing AI, and Facebook to explore images with auto-generated captions of diverging quality, and judged the correctness of the captions. Preliminary results show that participants made more accurate judgements about the correctness of the captions when using Image Explorer, although they were highly confident about their judgement regardless of the tool used. Overall, Image Explorer is a novel touch exploration system that makes images more accessible for BVI users by potentially encouraging skepticism and enabling users to independently validate auto-generated captions.
Citations: 8
I See What You're Saying: A Literature Review of Eye Tracking Research in Communication of Deaf or Hard of Hearing Users
Chanchal Agrawal, R. Peiris
DOI: 10.1145/3441852.3471209 (published 2021-10-17)
Abstract: Deaf or hard-of-hearing (DHH) individuals rely heavily on their visual senses to be aware of their environment, giving them heightened visual cognition and improved attention management strategies. The eyes have been shown to play a significant role in these visual communication practices, and many studies have therefore adopted eye-tracking methodologies to understand the gaze patterns and analyze the behavior of DHH individuals. In this paper, we provide a literature review of 55 papers and data analysis from eye-tracking studies concerning hearing impairment, attention management strategies, and modes of communication such as visual- and text-based communication. Through this survey, we summarize the findings and provide future research directions.
Citations: 8
Beyond Adaptive Sports: Challenges & Opportunities to Improve Accessibility and Analytics
Rushil Khurana, Ashley Wang, Patrick Carrington
DOI: 10.1145/3441852.3471223 (published 2021-10-17)
Abstract: A recent surge in sensing platforms for sports has been accompanied by drastic improvements in the quality of data analytics. This improved quality has catalyzed notable progress in training techniques, athletic performance tracking, real-time strategy management, and even better refereeing. However, despite a sustained growth in the number of para-athletes, there has been little exploration into the accessibility and data analytics needs for adaptive sports. We interviewed 18 participants in different roles (athletes, coaches, and high-performance managers) across six adaptive sports. We probed them on their current practices, existing challenges, and analytical needs. We uncovered common themes prevalent across all six sports and further examined findings in three groups: (1) blind sports; (2) wheelchair sports; and (3) adaptive sports with high equipment. Our study highlights the challenges faced by different adaptive sports and unearths opportunities for future research to improve accessibility and address specific needs for each sport.
Citations: 1