Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

CamIO: a 3D computer vision system enabling audio/haptic interaction with physical objects by blind users
H. Shen, Owen Edwards, Joshua A. Miele, J. Coughlan
DOI: 10.1145/2513383.2513423
Abstract: CamIO (short for "Camera Input-Output") is a novel camera system designed to make physical objects (such as documents, maps, devices and 3D models) fully accessible to blind and visually impaired persons, by providing real-time audio feedback in response to the location on an object that the user is pointing to. The project will have wide-ranging impact on access to graphics, tactile literacy, STEM education, independent travel and wayfinding, access to devices, and other applications to increase the independent functioning of blind, low-vision and deaf-blind individuals. We describe our preliminary results with a prototype CamIO system consisting of the Microsoft Kinect camera connected to a laptop computer. An experiment with a blind user demonstrates the feasibility of the system, which issues text-to-speech (TTS) annotations whenever the user's fingers approach any pre-defined "hotspot" regions on the object.
Citations: 28
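The hotspot mechanism the CamIO abstract describes can be sketched as a simple proximity check: the camera tracks the fingertip in 3D, and an annotation is spoken when the fingertip comes within a threshold of a pre-defined hotspot. The hotspot table, coordinates, and trigger radius below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical hotspot table: annotation -> (x, y, z) location on the object, in metres.
HOTSPOTS = {
    "power button": (0.10, 0.02, 0.05),
    "display": (0.10, 0.08, 0.05),
}
TRIGGER_RADIUS = 0.02  # assumed: announce when the fingertip is within 2 cm

def announce(fingertip, hotspots=HOTSPOTS, radius=TRIGGER_RADIUS):
    """Return the annotation for the nearest hotspot in range, or None."""
    best_name, best_dist = None, radius
    for name, pos in hotspots.items():
        dist = math.dist(fingertip, pos)  # Euclidean distance in 3D
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name

print(announce((0.105, 0.021, 0.049)))  # fingertip hovering near the power button
```

In the real system the returned string would be handed to a TTS engine rather than printed.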
Extending access to personalized verbal feedback about robots for programming students with visual impairments
S. Remy
DOI: 10.1145/2513383.2513384
Abstract: This work demonstrates improvements in a software tool that provides verbal feedback about executed robot code. Designed for programming students with visual impairments, the tool is now multi-lingual and no longer requires locally installed text-to-speech software. These developments use cloud and web standards to provide greater flexibility in generating personalized verbal feedback.
Citations: 3
Interviewing blind photographers: design insights for a smartphone application
D. Adams, Tory Gallagher, A. Ambard, S. Kurniawan
DOI: 10.1145/2513383.2513418
Abstract: Studies have shown that people with limited or no vision take, store, organize, and share photos, but little is known about how they do so. While the process of taking photos is somewhat understood, there has been little research into how exactly blind people store, organize, and share their photos without sighted help. We interviewed 11 people with limited to no vision who have taken digital photos, and analyzed their responses. We aim to use this information to motivate features of a smartphone application that will assist people with limited vision not only with aiming the camera to capture a "good" photo, but also with locating and organizing their photos so they may retrieve them at a later date or share them with others, online or offline.
Citations: 7
Uncovering information needs for independent spatial learning for users who are visually impaired
Nikola Banovic, Rachel L. Franz, K. Truong, Jennifer Mankoff, A. Dey
DOI: 10.1145/2513383.2513445
Abstract: Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly related to them. This paper examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals' context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities, something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.
Citations: 61
The feasibility of eyes-free touchscreen keyboard typing
K. Vertanen, Haythem Memmi, P. Kristensson
DOI: 10.1145/2513383.2513399
Abstract: Typing on a touchscreen keyboard is very difficult without being able to see the keyboard. We propose a new approach in which users imagine a Qwerty keyboard somewhere on the device and tap out an entire sentence without any visual reference to the keyboard and without intermediate feedback about the letters or words typed. To demonstrate the feasibility of our approach, we developed an algorithm that decodes blind touchscreen typing with a character error rate of 18.5%. Our decoder currently uses three components: a model of the keyboard topology and tap variability, a point transformation algorithm, and a long-span statistical language model. Our initial results demonstrate that our proposed method provides fast entry rates and promising error rates. On one-third of the sentences, novices' highly noisy input was successfully decoded with no errors.
Citations: 11
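The abstract's decoder combines a keyboard/tap-variability model with a statistical language model. A heavily simplified sketch of that combination: score each candidate word by a language-model prior plus a per-tap Gaussian likelihood around each key's centre, and pick the best. The grid layout, toy vocabulary, and noise parameter are assumptions for illustration; the paper's decoder also includes a point transformation step omitted here.

```python
import math

# Simplified Qwerty geometry: key -> (x, y) centre on a unit grid, with each
# row offset half a key (an assumed approximation of a real layout).
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {k: (c + 0.5 * r, r) for r, row in enumerate(ROWS) for c, k in enumerate(row)}

VOCAB = {"hello": 0.6, "jello": 0.1, "help": 0.3}  # toy unigram language model
SIGMA = 0.7  # assumed std. dev. of tap scatter, in key widths

def tap_log_likelihood(tap, key):
    """Log-likelihood of a tap under an isotropic Gaussian centred on the key."""
    kx, ky = KEYS[key]
    tx, ty = tap
    return -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * SIGMA ** 2)

def decode(taps):
    """Pick the vocabulary word that best explains the noisy tap sequence."""
    best, best_score = None, -math.inf
    for word, prior in VOCAB.items():
        if len(word) != len(taps):
            continue
        score = math.log(prior) + sum(
            tap_log_likelihood(t, ch) for t, ch in zip(taps, word))
        if score > best_score:
            best, best_score = word, score
    return best

# Noisy taps landing roughly over h-e-l-l-o:
print(decode([(5.4, 1.1), (2.2, 0.1), (8.3, 1.2), (8.6, 0.9), (8.9, 0.2)]))
```

A real decoder would search over character sequences with a long-span language model rather than a closed word list, but the scoring principle is the same.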
A haptic ATM interface to assist visually impaired users
B. Cassidy, G. Cockton, L. Coventry
DOI: 10.1145/2513383.2513433
Abstract: This paper outlines the design and evaluation of a haptic interface intended to convey non-audio-visual directions to an ATM (Automated Teller Machine) user. The haptic user interface is incorporated into the keypad of an ATM test apparatus. The system adopts a well-known "clock face" metaphor and is designed to provide haptic prompts to the user in the form of directions to the current active device, e.g. card reader or cash dispenser. Results of an evaluation indicate that users with varying levels of visual impairment are able to appropriately detect, distinguish and act on the prompts given to them by the haptic keypad. As well as reporting on how participants performed in the evaluation, we also report the results of a semi-structured interview designed to find out how acceptable participants found the technology for use on a cash machine. As a further contribution, the paper also presents observations on how participants place their hands on the haptic device and compares this with their performance.
Citations: 19
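The "clock face" metaphor in the abstract maps a direction into one of twelve discrete hours, with 12 o'clock straight ahead and hours increasing clockwise. A minimal sketch of that mapping, assuming a planar offset from the keypad to the target device (the convention and coordinate frame are assumptions, not details from the paper):

```python
import math

def clock_direction(dx, dy):
    """Map a planar offset (target device relative to the keypad) to a clock
    hour: dy points straight ahead (12 o'clock), dx points to the right."""
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = ahead, clockwise
    hour = round(angle / 30) % 12                   # 30 degrees per hour
    return 12 if hour == 0 else hour

print(clock_direction(0.0, 1.0))  # straight ahead
print(clock_direction(1.0, 0.0))  # directly to the right
```

The resulting hour could then drive whichever haptic prompt encodes that clock position.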
Physical accessibility of touchscreen smartphones
Shari Trewin, C. Swart, Donna Pettick
DOI: 10.1145/2513383.2513446
Abstract: This paper examines the use of touchscreen smartphones, focusing on physical access. Using interviews and observations, we found that participants with dexterity impairment considered a smartphone both useful and usable, but tablet devices offer several important advantages. Cost is a major barrier to adoption. We describe usability problems that are not addressed by existing accessibility options, and observe that the dexterity demands of important accessibility features made them unusable for many participants. Despite participants' enthusiasm for both smartphones and tablet devices, their potential is not yet fully realized for this population.
Citations: 66
Crowd caption correction (CCC)
Rebecca Perkins Harrington, G. Vanderheiden
DOI: 10.1145/2513383.2513413
Abstract: Captions can be found in a variety of media, including television programs, movies, webinars and telecollaboration meetings. Although very helpful, captions sometimes have errors, such as misinterpretations of what was said, missing words and misspellings of technical terms and proper names. Due to the labor-intensive nature of captioning, caption providers may not have the time or, in some cases, the background knowledge of meeting content that would be needed to correct errors in the captions. The Crowd Caption Correction (CCC) feature (and service) addresses this issue by allowing meeting participants or third-party individuals to make corrections to captions in real time during a meeting. Additionally, the feature also uses the captions to create a transcript of all captions broadcast during the meeting, which users can save and reference both during the meeting and at a later date. The feature will be available as a part of the Open Access Tool Tray System (OATTS) suite of open source widgets developed under the University of Wisconsin-Madison Trace Center Telecommunications RERC. The OATTS suite is designed to increase access to information during telecollaboration for individuals with a variety of disabilities.
Citations: 13
Answering visual questions with conversational crowd assistants
Walter S. Lasecki, Phyo Thiha, Yu Zhong, Erin L. Brady, Jeffrey P. Bigham
DOI: 10.1145/2513383.2517033
Abstract: Blind people face a range of accessibility challenges in their everyday lives, from reading the text on a package of food to traveling independently in a new place. Answering general questions about one's visual surroundings remains well beyond the capabilities of fully automated systems, but recent systems are showing the potential of engaging on-demand human workers (the crowd) to answer visual questions. The input to such systems has generally been a single image, which can limit the interaction with a worker to one question; or video streams where systems have paired the end user with a single worker, limiting the benefits of the crowd. In this paper, we introduce Chorus:View, a system that assists users over the course of longer interactions by engaging workers in a continuous conversation with the user about a video stream from the user's mobile device. We demonstrate the benefit of using multiple crowd workers instead of just one in terms of both latency and accuracy, then conduct a study with 10 blind users that shows Chorus:View answers common visual questions more quickly and accurately than existing approaches. We conclude with a discussion of users' feedback and potential future work on interactive crowd support of blind users.
Citations: 79
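One way multiple workers can improve both latency and accuracy, as the Chorus:View abstract argues, is to accept the first answer that a quorum of workers independently agrees on. The aggregation rule below is an assumed illustration of that idea, not the paper's actual mechanism.

```python
from collections import Counter

def first_agreed_answer(answers, quorum=2):
    """Return the first answer proposed by at least `quorum` workers.

    `answers` is an iterable of worker responses in arrival order. Requiring
    agreement filters out a single worker's mistake at the cost of waiting
    for one more response (an assumed policy for illustration).
    """
    counts = Counter()
    for answer in answers:
        key = answer.strip().lower()  # normalize trivially different phrasings
        counts[key] += 1
        if counts[key] >= quorum:
            return answer
    return None  # no agreement reached in this batch

print(first_agreed_answer(["diet coke", "coke", "diet coke"]))
```

With a single worker (quorum=1) the first response, right or wrong, would be returned immediately; the quorum trades a little latency for accuracy.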
Optimization of switch keyboards
Xiao Zhang, Kan Fang, G. Francis
DOI: 10.1145/2513383.2513394
Abstract: Patients with motor control difficulties often "type" on a computer using a switch keyboard to guide a scanning cursor to text elements. We show how to optimize some parts of the design of switch keyboards by casting the design problem as mixed integer programming. A new algorithm to find an optimized design solution is approximately 3600 times faster than a previous algorithm, which was also susceptible to finding a non-optimal solution. The optimization requires a model of the probability of an entry error, and we show how to build such a model from experimental data. Example optimized keyboards are demonstrated.
Citations: 1
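The objective such a layout optimization minimizes can be illustrated on a toy linear scanning keyboard: the cursor visits slots in order, so the expected scan steps per selection depend on where each character sits and how often it is typed. Exhaustive search over a four-key alphabet stands in here for the paper's mixed-integer programme, which scales to realistic layouts and also models entry errors; the frequencies are assumed.

```python
from itertools import permutations

# Toy character frequencies (assumed) for a linear scanning keyboard in which
# the cursor advances one slot per step and a switch press selects the slot.
FREQ = {"e": 0.5, "t": 0.3, "a": 0.15, "q": 0.05}

def expected_steps(order):
    """Expected number of scan steps per selection for a given slot order."""
    return sum(FREQ[ch] * (i + 1) for i, ch in enumerate(order))

# Brute force over all 4! = 24 orders; the optimum puts frequent keys first.
best = min(permutations(FREQ), key=expected_steps)
print(best, expected_steps(best))
```

For this frequency table the optimum is the frequency-sorted order e, t, a, q with 1.75 expected steps, which is the intuition the mixed-integer formulation generalizes to row-column scanning grids with error models.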