Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Improving Real-Time Captioning Experiences for Deaf and Hard of Hearing Students
Saba Kawas, G. Karalis, T. Wen, R. Ladner
DOI: https://doi.org/10.1145/2982142.2982164
Published: 2016-10-23
Abstract: We take a qualitative approach to understanding deaf and hard of hearing (DHH) students' experiences with real-time captioning as an access technology in mainstream university classrooms. We consider both existing human-based captioning and new machine-based solutions that use automatic speech recognition (ASR). We employed a variety of qualitative research methods to gather data about students' captioning experiences, including in-class observations, interviews, diary studies, and usability evaluations. We also conducted a co-design workshop with 8 stakeholders after our initial research findings. Our results show that accuracy and reliability of the technology are still the most important issues across captioning solutions. However, we additionally found that current captioning solutions tend to limit students' autonomy in the classroom and present a variety of user experience shortcomings, such as complex setups, poor feedback, and limited control over caption presentation. Based on these findings, we propose design requirements and recommend features for real-time captioning in mainstream classrooms.
Citations: 52

An Approach to Audio-Only Editing for Visually Impaired Seniors
Robin N. Brewer, M. Cartwright, A. Karp, Bryan Pardo, Anne Marie Piper
DOI: https://doi.org/10.1145/2982142.2982196
Published: 2016-10-23
Abstract: Older adults and people with vision impairments are increasingly using phones to receive audio-based information and want to publish content online, but they must use complex audio recording and editing tools that often rely on inaccessible graphical interfaces. This poster describes the design of an accessible audio-based interface for post-processing audio content created by visually impaired seniors. We conducted a diary study with five older adults with vision impairments to understand how to design a system that would allow them to edit content they record using an audio-only interface. Our findings can help inform the development of accessible audio-editing interfaces for people with vision impairments more broadly.
Citations: 5

Motivating Individuals with Spastic Cerebral Palsy to Speak Using Mobile Speech Recognition
Zachary Rubin, S. Kurniawan, Taylor Gotfrid, Annie Pugliese
DOI: https://doi.org/10.1145/2982142.2982203
Published: 2016-10-23
Abstract: Individuals with cerebral palsy (CP) struggle with conditions such as dysarthria, dysphagia, and dyspraxia as they speak. While speech therapy is effective in clinical practice, practice outside the office requires increased commitment and effort from caregivers. The researchers developed a speech recognition game designed to encourage out-of-office exercises and motivate users to practice, then recruited a participant with cerebral palsy to investigate the performance of the system in a live environment. The participant joined the game after a demonstration from the caregiver and temporarily increased speech loudness and clarity during play. The participant found sound effects more rewarding than animations. The total number of sentences spoken during the session was less than half that of a speaker without any impairment. The researchers also observed two instances of cheating. This work provides insight into automated approaches for motivating speech production with individuals with cerebral palsy.
Citations: 4

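The abstract above does not describe the game's internals. Purely as an illustration of the kind of automated motivation loop such a system might use, the following sketch rewards a speech attempt when the microphone signal's loudness exceeds a threshold. Everything here (the reward_if_loud_enough function, the RMS threshold, the use of raw normalized PCM samples) is a hypothetical construction, not the authors' implementation, which also involved speech recognition of target phrases.

```python
import math
from typing import Sequence

def rms_loudness(samples: Sequence[float]) -> float:
    """Root-mean-square loudness of a chunk of normalized PCM samples (-1.0 to 1.0)."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def reward_if_loud_enough(samples: Sequence[float], threshold: float = 0.1) -> bool:
    """Trigger a game reward (here just a message) when the utterance is loud enough.

    A real game would play a sound effect or animation instead of printing,
    and would combine this check with recognition of the target phrase.
    """
    loudness = rms_loudness(samples)
    if loudness >= threshold:
        print(f"Great job! (loudness={loudness:.2f})")   # stand-in for a sound-effect reward
        return True
    print(f"Try saying it a little louder. (loudness={loudness:.2f})")
    return False

# Example with synthetic audio chunks: a quiet attempt and a louder one.
quiet = [0.02 * math.sin(i / 5) for i in range(1000)]
loud = [0.4 * math.sin(i / 5) for i in range(1000)]
reward_if_loud_enough(quiet)   # below threshold, no reward
reward_if_loud_enough(loud)    # above threshold, reward
```
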
Eye-Gaze With Predictive Link Following Improves Accessibility as a Mouse Pointing Interface
Jason Vazquez-Li, L. Stachecki, John J. Magee
DOI: https://doi.org/10.1145/2982142.2982208
Published: 2016-10-23
Abstract: We propose a target-aware pointing approach to address one predominant problem in using eye-controlled mouse replacement software: the lack of high-precision movement. Our approach is based on Predictive Link Following, which alleviates the difficulties of link selection when using mouse replacement interfaces by predicting which link should be clicked based on the proximity of the cursor to the link. For cursor control via eye movement, an eye tracking algorithm was implemented using the Tobii EyeX device to detect and translate gaze location to screen coordinates. We conducted an experiment comparing eye-gaze controlled mouse pointing with and without the Predictive Link Following approach. Our results demonstrate increased accuracy of our system compared to just using eye-controlled mouse pointing.
Citations: 5

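The paper itself does not publish its selection logic. The sketch below is a minimal illustration of the proximity-based prediction idea described in the abstract: a gaze-derived cursor position is snapped to the nearest link, provided it falls within a distance threshold. The Link class, the predict_link function, and the 40-pixel threshold are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Link:
    """A clickable link described by its screen-space bounding box (pixels)."""
    label: str
    x: float       # left edge
    y: float       # top edge
    width: float
    height: float

def distance_to_link(cx: float, cy: float, link: Link) -> float:
    """Euclidean distance from the cursor to the nearest point of the link's box."""
    nearest_x = min(max(cx, link.x), link.x + link.width)
    nearest_y = min(max(cy, link.y), link.y + link.height)
    return ((cx - nearest_x) ** 2 + (cy - nearest_y) ** 2) ** 0.5

def predict_link(cx: float, cy: float, links: List[Link],
                 max_distance: float = 40.0) -> Optional[Link]:
    """Return the link closest to the gaze cursor, if it lies within max_distance pixels."""
    if not links:
        return None
    best = min(links, key=lambda link: distance_to_link(cx, cy, link))
    return best if distance_to_link(cx, cy, best) <= max_distance else None

# Example: a noisy gaze sample just outside the "Contact" link still selects it.
links = [Link("Home", 10, 10, 80, 20), Link("Contact", 10, 40, 80, 20)]
print(predict_link(95, 52, links))   # -> Link(label='Contact', ...)
```
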
Session details: Users with Developmental Disabilities
Kyle Rector
DOI: https://doi.org/10.1145/3254065
Published: 2016-10-23
Citations: 0

A Tool for Capturing Essential Preferences
Dana Ayotte, Michelle Brennan, Nancy J. Frishberg, Cynthia Jimes, Lisa Petrides, W. Quesenbery, M. Rothberg, R. Schwerdtfeger, J. Tobias, J. Treviranus, Shari Trewin, G. Vanderheiden
DOI: https://doi.org/10.1145/2982142.2982155
Published: 2016-10-23
Abstract: For some people, interaction preference settings like large fonts or speech output are essential for technology access. We demonstrate a 'First Discovery Tool' intended as an easy and accessible way for people to discover and set preferences to address major access barriers. The tool is designed to support people who have limited technology experience or confidence. Testing in educational and senior settings found that most participants were able to understand the preferences offered, and some discovered helpful options they were previously not aware of.
Citations: 0

"Holy Starches Batman!! We are Getting Walloped!": Crowdsourcing Comic Book Transcriptions “神圣的淀粉蝙蝠侠!!”我们被打败了!:众包漫画书转录
C. Samson, Casey Fiesler, Shaun K. Kane
{"title":"\"Holy Starches Batman!! We are Getting Walloped!\": Crowdsourcing Comic Book Transcriptions","authors":"C. Samson, Casey Fiesler, Shaun K. Kane","doi":"10.1145/2982142.2982211","DOIUrl":"https://doi.org/10.1145/2982142.2982211","url":null,"abstract":"Comic books are among the most popular forms of popular media, but most comics are not provided in an accessible format. Creating an accessible transcript of a comic book may be more challenging than simply describing the images, as comics involve complex interplay between words and images, and often feature long-running and complex storylines. In this poster we describe a pilot study exploring the feasibility of crowdsourcing transcriptions of comic book pages. We recruited 60 crowd workers and asked them to transcribe a page of a comic book; half were told that the description was for a blind person, and half were not. We found that people who knew that they were transcribing for a blind person produced longer, more detailed descriptions. Our results also suggest that comic book knowledge may have at least some small impact on description detail.","PeriodicalId":306165,"journal":{"name":"Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116685205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Promoting Strategic Research on Inclusive Access to Rich Online Content and Services
C. Lewis, Shaun K. Kane, R. Ladner
DOI: https://doi.org/10.1145/2982142.2982193
Published: 2016-10-23
Abstract: How can the broader field of computer science research be harnessed to address challenges and opportunities in accessibility? This poster summarizes the findings of a workshop, sponsored by the Computing Community Consortium (USA), that brought together computer scientists, representatives of disability advocacy organizations, people from industry, and government employees to develop an agenda for strategic research. Members of the ASSETS community may find the reports useful in organizing collaborative projects with others in the computer science community.
Citations: 0

An Evaluation of SingleTapBraille Keyboard: A Text Entry Method that Utilizes Braille Patterns on Touchscreen Devices
Maraim Alnfiai, S. Sampalli
DOI: https://doi.org/10.1145/2982142.2982161
Published: 2016-10-23
Abstract: This paper provides an evaluation of the SingleTapBraille keyboard, designed to assist people with no or low vision in using touchscreen smartphones. This application allows blind users to input characters based on braille patterns. To assess SingleTapBraille, this study compares its performance with that of the commonly used QWERTY keyboard. We conducted an evaluation study with 7 blind participants to examine the performance of both keyboards on Android platforms. Overall, participants were able to quickly adjust to SingleTapBraille and type on touchscreen devices using their knowledge of braille patterns within fifteen to twenty minutes of introduction to the system. The SingleTapBraille keyboard was better than the QWERTY keyboard in terms of both speed and accuracy, indicating that SingleTapBraille represents an improvement over existing alternatives in making touchscreen keyboards more accessible for blind users. Based on the evaluation results and the feedback of our participants, we discuss the strengths and weaknesses of previous keyboards that have been used by participants, as well as those of SingleTapBraille. In doing so, we consider possible design improvements for the future development of accessible keyboards for blind users.
Citations: 31

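The abstract describes entering characters from braille patterns but not SingleTapBraille's exact tap-detection scheme. The sketch below only illustrates the decoding step any braille-pattern keyboard needs: mapping a six-dot cell (dots 1-3 in the left column, 4-6 in the right, numbered top to bottom) to a character. The dot patterns shown are standard English braille for the letters a-j; the function names and the word-level helper are illustrative, not taken from SingleTapBraille.

```python
# Standard six-dot braille patterns for the letters a-j
# (dots 1-3 in the left column, 4-6 in the right, numbered top to bottom).
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_cell(dots: set) -> str:
    """Map a set of raised-dot numbers (1-6) to a character, or '?' if unknown."""
    return BRAILLE_TO_CHAR.get(frozenset(dots), "?")

def decode_word(cells) -> str:
    """Decode a sequence of braille cells (sets of dot numbers) into text."""
    return "".join(decode_cell(cell) for cell in cells)

# Example: the cells {1,2,4} {1} {1,4} {1,5} spell "face".
print(decode_word([{1, 2, 4}, {1}, {1, 4}, {1, 5}]))  # -> face
```
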
Comparing Tactile, Auditory, and Visual Assembly Error-Feedback for Workers with Cognitive Impairments
T. Kosch, R. Kettner, Markus Funk, A. Schmidt
DOI: https://doi.org/10.1145/2982142.2982157
Published: 2016-10-23
Abstract: More and more industrial manufacturing companies are outsourcing assembly tasks to sheltered work organizations, where cognitively impaired workers are employed. To facilitate these assembly tasks, assistive systems have been introduced to provide cognitive assistance. Previous work found that these assistive systems have a great impact on workers' performance in giving assembly instructions, and such systems are further capable of detecting errors and notifying the worker of an assembly error. However, how assembly errors are presented to cognitively impaired workers has not been analyzed scientifically. In this paper, we close this gap by comparing tactile, auditory, and visual error feedback in a user study with 16 cognitively impaired workers. The results reveal that visual error feedback leads to a significantly faster assembly time compared to tactile error feedback. Further, we discuss design implications for providing error feedback for workers with cognitive impairments.
Citations: 61

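The paper compares feedback modalities rather than publishing code. As a hypothetical sketch of how an assistive workstation might route a detected assembly error to one of the three modalities studied, the snippet below dispatches on a modality setting. The ErrorFeedback class and the print placeholders stand in for real vibration-motor, speaker, and display drivers and are not from the paper.

```python
from enum import Enum

class Modality(Enum):
    TACTILE = "tactile"
    AUDITORY = "auditory"
    VISUAL = "visual"

class ErrorFeedback:
    """Routes a detected assembly error to the configured feedback modality.

    The print calls are placeholders; a real system would drive a vibration
    motor, play a tone, or highlight the faulty assembly step on a display.
    """

    def __init__(self, modality: Modality):
        self.modality = modality

    def notify(self, step: int, message: str) -> None:
        if self.modality is Modality.TACTILE:
            print(f"[vibrate] step {step}: {message}")
        elif self.modality is Modality.AUDITORY:
            print(f"[play tone] step {step}: {message}")
        else:  # Modality.VISUAL
            print(f"[highlight on display] step {step}: {message}")

# Example: the study condition determines which modality the worker receives.
feedback = ErrorFeedback(Modality.VISUAL)
feedback.notify(3, "wrong part picked, expected the blue bracket")
```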