The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility — latest publications

Supporting deaf children's reading skills: the many challenges of text simplification
C. Vettori, O. Mich. DOI: 10.1145/2049536.2049608. Published 2011-10-24.
Abstract: Deaf children have great difficulties in reading comprehension. In our contribution, we illustrate how we have collected, simplified and presented some stories in order to render them suitable for young Italian deaf readers, both from a linguistic and a formal point of view. The aim is to stimulate their pleasure in reading. The experimental data suggest that the approach is effective and that enriching the stories with static and/or animated drawings significantly improves text readability. However, they also clearly point out that textual simplification alone is not enough to meet the needs of the target group, and that the story structure itself and its presentation have to be carefully planned.
Citations: 9

Kinerehab: a kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities
Jun Huang. DOI: 10.1145/2049536.2049627. Published 2011-10-24.
Abstract: This study used Microsoft's Kinect motion sensor to develop an intelligent rehabilitation system. Through discussion with physical therapists at the Kaohsiung County Special Education School, researchers understood that students with physical disabilities typically lack enthusiasm for rehabilitation, hindering their recovery of limb function and ability to care for themselves. Because therapists must simultaneously care for numerous students, there is also a shortage of human resources. Using fieldwork and recommendations by physical therapists, this study applied the proposed system to students with muscle atrophy and cerebral palsy, and assisted them in physical therapy. The system increased their motivation to participate in rehabilitation and enhanced the efficiency of rehab activities, greatly contributing to the recovery of muscle endurance and reducing the workload of therapists.
Citations: 13

An integrated system for blind day-to-day life autonomy
H. Fernandes, J. Faria, H. Paredes, J. Barroso. DOI: 10.1145/2049536.2049579. Published 2011-10-24.
Abstract: The autonomy of blind people in their daily life depends on their knowledge of the surrounding world, and they are aided by keen senses and assistive devices that help them to deduce their surroundings. Existing solutions require that users carry a wide range of devices and, mostly, do not include mechanisms to ensure the autonomy of users in the event of system failure. This paper presents the nav4b system, which combines guidance and navigation with object recognition, extending traditional aids (white cane and smartphone). A working prototype was installed on the UTAD campus to perform experiments with blind users.
Citations: 19

MICOO (multimodal interactive cubes for object orientation): a tangible user interface for the blind and visually impaired
Muhanad S. Manshad, Enrico Pontelli, Shakir J. Manshad. DOI: 10.1145/2049536.2049597. Published 2011-10-24.
Abstract: This paper presents the development of Multimodal Interactive Cubes for Object Orientation (MICOO) manipulatives. This system provides a multimodal tangible user interface (TUI), enabling people with visual impairments to create, modify and naturally interact with diagrams and graphs on a multitouch surface. The system supports a novel notion of active orientation and proximity tracking of manipulatives against diagram and graph components. If the orientation of a MICOO matches a component, the user can modify that component by moving the MICOO. Conversely, if a MICOO does not match a component's orientation or is far from it, audio feedback is activated to help the user reach that component. This lessens the need for manual intervention, enables independent discovery on the part of the user, and offers dynamic behavior, as the representation interacts with and provides feedback to the user. The platform has been developed and is undergoing formal evaluation (e.g., browsing, modifying and constructing graphs on a Cartesian plot and diagrams).
Citations: 20

Web-based sign language synthesis and animation for on-line assistive technologies
Z. Krňoul. DOI: 10.1145/2049536.2049620. Published 2011-10-24.
Abstract: This article presents recent progress in the design of sign language synthesis and avatar animation adapted for the web environment. A new 3D rendering method is considered to enable the transfer of avatar animation to end users. Furthermore, the animation efficiency of facial expressions, as part of the non-manual component, is discussed. The designed web service ensures on-line accessibility and fluent animation of the 3D avatar model, requires no additional software, and offers a wide range of uses for target users.
Citations: 2

Living in a world of data
A. Dix. DOI: 10.1145/2049536.2049538. Published 2011-10-24.
Abstract: The web is an integral part of our daily lives, and has had profound impacts on us all, not least both positive and negative impacts on accessibility, inclusivity and social justice. However, the web is constantly changing. Web 2.0 has brought the web into the heart of social life, and has had a mixed impact on accessibility. More recently, the rise in API access to web services and various forms of open, linked or semantic data is creating a more data/content face to the media web. As with all technology, this new data web poses fresh challenges and offers new opportunities.
Citations: 0

Increased accessibility to nonverbal communication through facial and expression recognition technologies for blind/visually impaired subjects
D. Astler, Harrison Chau, Kailin Hsu, A. Hua, A. Kannan, Lydia Lei, Melissa Nathanson, Esmaeel Paryavi, Michelle Rosen, Hayato Unno, Carol Wang, Khadija Zaidi, Xuemin Zhang, Cha-Min Tang. DOI: 10.1145/2049536.2049596. Published 2011-10-24.
Abstract: Conversation between two individuals requires verbal dialogue; the majority of human communication, however, consists of non-verbal cues such as gestures and facial expressions. Blind individuals are thus hindered in their interaction capabilities. To address this, we are building a computer vision system with facial recognition and expression algorithms to relay nonverbal messages to a blind user. The device will communicate the identities and facial expressions of communication partners in real time. In order to ensure that this device will be useful to the blind community, we conducted surveys and interviews, and we are working with subjects to test prototypes of the device. This paper describes the algorithms and design concepts incorporated in this device, and it provides a commentary on early survey and interview results. A corresponding poster with demonstration stills is exhibited at this conference.
Citations: 31

Smartphone haptic feedback for nonvisual wayfinding
Shiri Azenkot, R. Ladner, J. Wobbrock. DOI: 10.1145/2049536.2049607. Published 2011-10-24.
Abstract: We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using the 3 vibration feedback methods and no audio output. Participants interpreted the feedback with an average error rate of just 4 percent. Most preferred the Pattern method, where patterns of vibrations indicate different directions, or the ScreenEdge method, where areas of the screen correspond to directions and touching them may induce vibration.
Citations: 49

Mobile web on the desktop: simpler web browsing
J. Hoehl, C. Lewis. DOI: 10.1145/2049536.2049598. Published 2011-10-24.
Abstract: This paper explores the potential benefits of using mobile webpages to present simpler web content to people with cognitive disabilities. An empirical analysis revealed that the majority of popular mobile sites are smaller than their desktop equivalents, with an average of half the viewable content, making them a viable method for simplifying web presentation.
Citations: 10

Evaluating quality and comprehension of real-time sign language video on mobile phones
Jessica J. Tran, Joy Kim, Jaehong Chon, E. Riskin, R. Ladner, J. Wobbrock. DOI: 10.1145/2049536.2049558. Published 2011-10-24.
Abstract: Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels) which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network.
Citations: 18
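The Tran et al. entry above hinges on PSNR as the standard objective quality metric. For reference, PSNR is a fixed formula over the mean squared error between a reference and a distorted frame; the sketch below shows the standard computation (the function name and flat-list frame representation are illustrative, not from the paper):

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences.

    PSNR = 10 * log10(max_val^2 / MSE), where MSE is the mean squared
    pixel difference. Higher is better; identical frames give infinity.
    """
    if len(reference) != len(distorted):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 16 gray levels on 8-bit pixels gives MSE = 256,
# so PSNR = 10 * log10(255**2 / 256) ≈ 24.05 dB.
print(psnr([0] * 64, [16] * 64))
```

Because PSNR averages pixel error uniformly, it cannot weight the hands and face that carry most sign language information — which is one plausible reading of why the paper finds it mismatched with viewers' resolution preferences.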