The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility — latest publications

Participatory design process for an in-vehicle affect detection and regulation system for various drivers
M. Jeon, Jason Roberts, Parameshwaran Raman, Jung-Bin Yim, B. Walker
DOI: 10.1145/2049536.2049602 (published 2011-10-24)
Abstract: Considerable research has shown that diverse affective (emotional) states influence cognitive processes and performance. Detecting a driver's affective states and regulating them may help increase driving performance and safety. Some populations are more vulnerable to issues regarding driving, affect, and affect regulation (e.g., novice drivers, young drivers, older drivers, and drivers with TBI (Traumatic Brain Injury)). This paper describes initial findings from multiple participatory design processes, including interviews with 21 young drivers and focus groups with a TBI driver and two driver rehab specialists. Each user group has distinct issues and needs; therefore, differentiated approaches are needed to design an in-vehicle assistive technology system for a specific target user group.
Citations: 9
Session details: Student research competition
Krzysztof Z Gajos
DOI: 10.1145/3253163 (published 2011-10-24)
Citations: 0
We need to communicate!: helping hearing parents of deaf children learn american sign language
Kimberly Weaver, Thad Starner
DOI: 10.1145/2049536.2049554 (published 2011-10-24)
Abstract: Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide, as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing parents learn ASL. To this end, we have interviewed members of our target population to understand their motivations and needs when learning sign language. We found that the most common motivation for parents learning ASL is better communication with their children. Parents are most interested in acquiring more fluent sign language skills through learning to read stories to their children.
Citations: 48
Using a game controller for text entry to address abilities and disabilities specific to persons with neuromuscular diseases
T. Felzer, S. Rinderknecht
DOI: 10.1145/2049536.2049616 (published 2011-10-24)
Abstract: This paper proposes a poster about an alternative text entry method based on a commercially available game controller as the input device, as well as a demo of the accompanying software application. The system was originally intended for a particular gentleman with the neuromuscular disease Friedreich's Ataxia (FA), who asked us several years ago to develop an optimal keyboard replacement for him. An initial case study focused on his impressions while testing this newest attempt. Taking the tester's comments into account, the outcome seems rather promising in meeting his needs, and it appears very probable that the system could help anyone with a similar condition.
Citations: 8
Improving calibration time and accuracy for situation-specific models of color differentiation
David R. Flatla, C. Gutwin
DOI: 10.1145/2049536.2049572 (published 2011-10-24)
Abstract: Color vision deficiencies (CVDs) cause problems in situations where people need to differentiate the colors used in digital displays. Recoloring tools exist to reduce the problem, but these tools need a model of the user's color-differentiation ability in order to work. Situation-specific models are a recent approach that accounts for all of the factors affecting a person's CVD (including genetic, acquired, and environmental causes) by using calibration data to form the model. This approach works well, but requires repeated calibration, and the best available calibration procedure takes more than 30 minutes. To address this limitation, we have developed a new situation-specific model of human color differentiation (called ICD-2) that needs far fewer calibration trials. The new model uses a color space that better matches human color vision than the RGB space of the old model, and can therefore extract more meaning from each calibration test. In an empirical comparison, we found that ICD-2 is 24 times faster than the old approach, with small but significant gains in accuracy. The efficiency of ICD-2 makes it feasible for situation-specific models of individual color differentiation to be used in the real world.
Citations: 23
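The abstract's key design point is that distances measured in a perceptually uniform color space track human discrimination far better than distances in raw RGB. ICD-2 itself is not described here, so as a minimal illustrative sketch (not the paper's method), the snippet below converts sRGB to CIELAB and computes the CIE76 difference, the standard way to approximate perceived color difference:

```python
import math

def srgb_to_linear(c):
    # Undo the sRGB gamma curve (c in 0..1).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    # sRGB (0-255) -> linear RGB -> CIE XYZ (D65 white) -> CIELAB.
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    # Normalize by the D65 reference white before the Lab transform.
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    # CIE76: Euclidean distance in Lab approximates perceived difference.
    return math.dist(rgb_to_lab(*c1), rgb_to_lab(*c2))
```

Because equal Lab distances correspond (roughly) to equal perceived differences, each calibration trial placed in such a space constrains the user's discrimination threshold in every direction at once, which is the intuition behind needing fewer trials than an RGB-based model.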
The effect of hand strength on pointing performance of users for different input devices
P. Biswas, P. Langdon
DOI: 10.1145/2049536.2049611 (published 2011-10-24)
Abstract: We have investigated how hand strength affects pointing performance of people with and without mobility impairment in graphical user interfaces for four different input modalities. We found that grip strength and active range of motion of the wrist are most indicative of pointing performance. We used the study to develop a set of linear equations to predict pointing time for different devices.
Citations: 0
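The paper's predictive model is a set of per-device linear equations in grip strength and wrist range of motion. The published coefficients are not reproduced in this listing, so the sketch below uses invented calibration numbers purely to show the ordinary-least-squares form such an equation takes:

```python
import numpy as np

# Hypothetical calibration data (NOT from the paper): grip strength (kg),
# active wrist range of motion (degrees), and measured pointing time (ms).
grip = np.array([30.0, 25.0, 18.0, 12.0, 8.0, 5.0])
wrist_rom = np.array([140.0, 130.0, 110.0, 90.0, 70.0, 55.0])
time_ms = np.array([650.0, 700.0, 820.0, 960.0, 1150.0, 1320.0])

# Design matrix with an intercept column; solve T = a*grip + b*rom + c by
# least squares. The paper fits one such equation per input device.
X = np.column_stack([grip, wrist_rom, np.ones_like(grip)])
coef, *_ = np.linalg.lstsq(X, time_ms, rcond=None)

def predict_time(g, rom):
    # Predicted pointing time (ms) for a user with the given hand metrics.
    return coef[0] * g + coef[1] * rom + coef[2]
```

A model of this shape lets an interface estimate, from two quick clinical measurements, how long a target acquisition will take a given user on each candidate device, and pick the fastest one.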
Multi-view platform: an accessible live classroom viewing approach for low vision students
R. Kushalnagar, Stephanie A. Ludi, P. Kushalnagar
DOI: 10.1145/2049536.2049600 (published 2011-10-24)
Abstract: We present a multiple-view platform for low vision students that utilizes students' personal smartphone cameras and tablets in the classroom. Low vision or deaf students can independently use the platform to obtain flexible, magnified views of lecture visuals, such as the presentation slides or whiteboard, on their personal screen. The platform also enables cooperation among sighted and hearing classmates to provide better views for everyone, including themselves.
Citations: 6
Evaluating importance of facial expression in american sign language and pidgin signed english animations
Matt Huenerfauth, Pengfei Lu, A. Rosenberg
DOI: 10.1145/2049536.2049556 (published 2011-10-24)
Abstract: Animations of American Sign Language (ASL) and Pidgin Signed English (PSE) have accessibility benefits for many signers with lower levels of written language literacy. In prior experimental studies evaluating animations of ASL, native signers gave informal feedback critiquing the insufficient and inaccurate facial expressions of the virtual human character. While face movements are important for conveying grammatical and prosodic information in human ASL signing, no empirical evaluation of their impact on the understandability and perceived quality of ASL animations had previously been conducted. To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding motivates our future work on facial expressions in ASL and PSE animations, and it lays a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information.
Citations: 41
Leveraging large data sets for user requirements analysis
M. Wolters, Vicki L. Hanson, Johanna D. Moore
DOI: 10.1145/2049536.2049550 (published 2011-10-24)
Abstract: In this paper, we show how a large demographic data set that includes only high-level information about health and disability can be used to specify user requirements for people with specific needs and impairments. As a case study, we consider adapting spoken dialogue systems (SDS) to the needs of older adults. Such interfaces are becoming increasingly prevalent in telecare and home care, where they will often be used by older adults. As our data set, we chose the English Longitudinal Study of Ageing (ELSA), a large representative survey of the health, wellbeing, and socioeconomic status of English older adults. In an inclusion audit, we show that one in four older people surveyed by ELSA might benefit from SDS due to problems with dexterity, mobility, vision, or literacy. Next, we examine the technology that is available to our target users (technology audit) and estimate factors that might prevent older people from using SDS (exclusion audit). We conclude that while SDS are ideal for solutions delivered over near-ubiquitous landlines, they need to be accessible to people with mild to moderate hearing problems, and thus multimodal solutions should be based on the television, a technology even more widespread than landlines.
Citations: 17
Supporting blind photography
C. Jayant, H. Ji, Samuel White, Jeffrey P. Bigham
DOI: 10.1145/2049536.2049573 (published 2011-10-24)
Abstract: Blind people want to take photographs for the same reasons as others: to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work their best these systems need high-quality photos as input. In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people, and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer, designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could learn to use the application successfully very quickly.
Citations: 139