Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Investigating Cursor-based Interactions to Support Non-Visual Exploration in the Real World
Anhong Guo, Saige McVea, Xu Wang, Patrick Clary, Ken Goldman, Yang Li, Yu Zhong, Jeffrey P. Bigham
DOI: 10.1145/3234695.3236339 | Published: 2018-10-08 | Citations: 12
Abstract: The human visual system processes complex scenes to focus attention on relevant items. However, blind people cannot visually skim for an area of interest. Instead, they use a combination of contextual information, knowledge of the spatial layout of their environment, and interactive scanning to find and attend to specific items. In this paper, we define and compare three cursor-based interactions to help blind people attend to items in a complex visual scene: window cursor (move their phone to scan), finger cursor (point their finger to read), and touch cursor (drag their finger on the touchscreen to explore). We conducted a user study with 12 participants to evaluate the three techniques on four tasks, and found that: window cursor worked well for locating objects on large surfaces, finger cursor worked well for accessing control panels, and touch cursor worked well for helping users understand spatial layouts. A combination of multiple techniques will likely be best for supporting a variety of everyday tasks for blind users.

Using Icons to Communicate Privacy Characteristics of Adaptive Assistive Technologies
Kellie Poneres, Foad Hamidi, Aaron K. Massey, A. Hurst
DOI: 10.1145/3234695.3241003 | Published: 2018-10-08 | Citations: 2
Abstract: Adaptive assistive technologies can support the accessibility needs of people with changing abilities by monitoring and adapting to their performance over time. Despite their benefits, these systems can pose privacy threats to users whose data is collected. This issue is amplified by ambiguity about how user performance data, which might reveal sensitive health information, is used by these applications, and whether, like medical data, it is protected from unauthorized sharing with third parties. In interviews with older adults who experience pointing difficulties, we found that participants felt a lack of agency over their collected pointing data and desired clear communication mechanisms to keep them informed about the privacy characteristics of adaptive assistive systems. Based on this input, we present an icon set that can be used in online application stores or with the licensing agreements of adaptive systems to visually communicate privacy characteristics to users.

Mixed-Ability Collaboration for Accessible Photo Sharing
Reeti Mathur, Erin L. Brady
DOI: 10.1145/3234695.3240994 | Published: 2018-10-08 | Citations: 13
Abstract: We conducted two online surveys about current and potential collaborative photo sharing processes among blind and sighted people. We describe existing challenges that blind and visually impaired people encounter when trying to write alternative text for their own photographs and examine how their online sighted friends and family members might be able to contribute assistance as they make their content more accessible to other people with visual impairments.

Session details: Keynote
R. Ladner
DOI: 10.1145/3284374 | Published: 2018-10-08 | Citations: 0

Interactiles
Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, J. Fogarty, Jennifer Mankoff
DOI: 10.1145/3234695.3236349 | Published: 2018-10-08 | Citations: 7
Abstract: The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. Prior investigation of tactile solutions for large touchscreens also may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.

Help Kiosk: An Augmented Display System to Assist Older Adults to Learn How to Use Smart Phones
Z. Wilson, Helen Yin, S. Sarcar, R. Leung, Joanna McGrenere
DOI: 10.1145/3234695.3241008 | Published: 2018-10-08 | Citations: 5
Abstract: Older adults have difficulty using and learning to use smart phones, in part because the displays are too small to provide effective interactive help. Our work explores the use of a large display to temporarily augment the small phone display to support older adults during learning episodes. We designed and implemented a learning system called Help Kiosk which contains unique features to scaffold the smart phone learning process for older adults. We conducted a mixed-methods user study with 16 older adults (55+) to understand the impact of this unique design approach, comparing it with the smart phone's official printed instruction manual. We found Help Kiosk gave participants more confidence that they were doing the tasks correctly, and helped minimize the need to switch their attention between the instructions and their phone.

Interdependence as a Frame for Assistive Technology Research and Design
Cynthia L. Bennett, Erin L. Brady, Stacy M. Branham
DOI: 10.1145/3234695.3236348 | Published: 2018-10-08 | Citations: 191
Abstract: In this paper, we describe interdependence for assistive technology design, a frame developed to complement the traditional focus on independence in the Assistive Technology field. Interdependence emphasizes collaborative access and people with disabilities' important and often understated contribution in these efforts. We lay the foundation of this frame with literature from the academic discipline of Disability Studies and popular media contributed by contemporary disability justice activists. Then, drawing on cases from our own work, we show how the interdependence frame (1) synthesizes findings from a growing body of research in the Assistive Technology field and (2) helps us orient to additional technology design opportunities. We position interdependence as one possible orientation to, not a prescription for, research and design practice--one that opens new design possibilities and affirms our commitment to equal access for people with disabilities.

Multimodal Deep Learning using Images and Text for Information Graphic Classification
Edward Kim, Kathleen F. McCoy
DOI: 10.1145/3234695.3236357 | Published: 2018-10-08 | Citations: 28
Abstract: Information graphics, e.g. line or bar graphs, are often displayed in documents and popular media to support an intended message, but for a growing number of people, they are missing the point. The World Health Organization estimates that the number of people with vision impairment could triple in the next thirty years due to population growth and aging. If a graphic is not described, explained in the text, or missing alt tags and other metadata (as is often the case in popular media), the intended message is lost or not adequately conveyed. In this work, we describe a multimodal deep learning approach that supports the communication of the intended message. The multimodal model uses both the pixel data and text data in a single neural network to classify the information graphic into an intention category that has previously been validated as useful for people who are blind or who are visually impaired. Furthermore, we collect a new dataset of information graphics and present qualitative and quantitative results that show our multimodal model exceeds the performance of any one modality alone, and even surpasses the capabilities of the average human annotator.

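The fusion idea in this abstract, feeding pixel-derived and text-derived signals into a single network that predicts an intention category, can be sketched as a simple late-fusion classifier. This is a minimal illustrative sketch, not the authors' architecture: the feature sizes, the random features standing in for encoder outputs, the number of categories (10), and the single linear head are all assumptions.

```python
import math
import random

random.seed(0)

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical pre-extracted features: in a real pipeline these would come
# from an image encoder (pixel data) and a text encoder (text in and around
# the graphic). Values here are random placeholders.
image_features = [random.gauss(0, 1) for _ in range(64)]
text_features = [random.gauss(0, 1) for _ in range(32)]

# Late fusion: concatenate the two modalities into one joint vector.
fused = image_features + text_features

# A linear classification head over the fused vector, mapping it to one of
# n_intents intention categories (n_intents = 10 is an assumption).
n_intents = 10
weights = [[random.gauss(0, 0.1) for _ in fused] for _ in range(n_intents)]
bias = [0.0] * n_intents

scores = [sum(w * x for w, x in zip(row, fused)) + b
          for row, b in zip(weights, bias)]
probs = softmax(scores)
predicted_intent = max(range(n_intents), key=lambda i: probs[i])
```

In a trained system the weights would be learned jointly with the encoders; the point here is only the shape of the computation: two modalities, one fused vector, one categorical output.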
MathMelodies 2: a Mobile Assistive Application for People with Visual Impairments Developed with React Native
Niccolò Cantù, Mattia Ducci, D. Ahmetovic, C. Bernareggi, S. Mascetti
DOI: 10.1145/3234695.3241006 | Published: 2018-10-08 | Citations: 10
Abstract: Cross-platform development techniques have attracted a lot of attention in recent years, especially in the field of mobile applications, because they enable developers to write apps in a single programming language for different platforms (e.g., iOS and Android). One well-known framework for cross-platform development is React Native, which provides features to support accessibility for blind or visually impaired (BVI) people. However, to the best of our knowledge, the accessibility of applications developed with this framework has not been systematically investigated. In this contribution we report our experience developing MathMelodies 2, an application that supports BVI children in studying mathematics. The first version of MathMelodies was developed with native code for iPad only, while MathMelodies 2 was developed with React Native to run on both iOS and Android smartphones and tablets.

Axessibility
Dragan Ahmetovic, T. Armano, Cristian Bernareggi, M. Berra, Anna Capietto, Sandro Coriasco, Nadir Murru, Alice Ruighi, E. Taranto
DOI: 10.1145/3234695.3241029 | Published: 2018-10-08 | Citations: 2
Abstract: Accessing mathematical formulae within digital documents is challenging for blind people. In particular, document formats designed for printing, such as PDF, structure math content for visual access only. While accessibility features exist to present PDF content non-visually, support for formulae is limited to providing replacement text that can be read by a screen reader or displayed on a braille bar. However, inserting replacement text is left to document authors, who rarely provide such content; at best, descriptions of the formulae are provided. Thus, conveying a detailed understanding of complex formulae is nearly impossible. In this contribution we report our ongoing research on Axessibility, a LaTeX package framework that automates the process of making mathematical formulae accessible by providing the formulae's LaTeX code as PDF replacement text. Axessibility is coupled with external scripts that automate its integration into existing documents, expand user shorthand macros to standard LaTeX representation, and provide custom screen reader dictionaries that improve formulae reading on screen readers.

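Because the abstract describes Axessibility as a LaTeX package that works automatically once loaded, author-side usage would amount to adding it to the preamble. The sketch below is inferred from the abstract, not from the package's documentation; the package name is taken from the paper, while the exact interface and any options are assumptions.

```latex
\documentclass{article}
% Per the abstract, loading the package makes the LaTeX source of each
% formula available as PDF replacement text, so a screen reader can read
% the formula's code instead of a (usually missing) prose description.
\usepackage{axessibility}

\begin{document}
With the package loaded, a formula typeset as usual:
\begin{equation}
  \int_0^1 x^2 \, dx = \frac{1}{3}
\end{equation}
would carry its own source as replacement text in the generated PDF.
\end{document}
```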