Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Social Haptic Communication mimicked with vibrotactile patterns - an evaluation by users with deafblindness
M. Plaisier, A. Kappers
{"title":"Social Haptic Communication mimicked with vibrotactile patterns - an evaluation by users with deafblindness","authors":"M. Plaisier, A. Kappers","doi":"10.1145/3441852.3476528","DOIUrl":"https://doi.org/10.1145/3441852.3476528","url":null,"abstract":"Many devices, such as smart phones, implement vibration motors for tactile feedback. When multiple vibration motors are placed on, for instance, the backrest of a chair it is possible to trace shapes on the back of a person by sequentially switching motors on and off. Social Haptic Communication (SHC) is a tactile mode of communication for persons with deafblindness that makes use of tracing shapes or other types of spatiotemporal patterns with the hand on the back of another person. This could be emulated using vibrotactile patterns. Here we investigated whether SHC users with deafblindness would recognize the vibrotactile patterns as SHC signs (Haptices). In several cases the participants immediately linked a vibrotactile patterns to the Haptice that is was meant to imitate. Together with the participants we improved and expanded the set of vibrotactile patterns.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"605 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
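The tracing technique in the abstract above (sequentially switching motors on and off to draw a shape on the back) is straightforward to express in code. The sketch below is a minimal illustration, not the authors' implementation; the `setMotor` driver, the grid layout, and the timing are all assumptions.

```typescript
// Hypothetical motor driver: a real system would command the chair's
// motor array (e.g. over serial or Bluetooth); here we just log.
function setMotor(index: number, on: boolean): void {
  console.log(`motor ${index} -> ${on ? "on" : "off"}`);
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// A Haptice approximated as an ordered path over motor indices,
// e.g. a downward stroke on a 3x3 grid laid out row by row.
const downwardStroke: number[] = [1, 4, 7];

async function playPattern(path: number[], dwellMs = 300): Promise<void> {
  for (const motor of path) {
    setMotor(motor, true);   // activate the current motor...
    await sleep(dwellMs);    // ...let it vibrate briefly...
    setMotor(motor, false);  // ...then move on, creating a moving sensation
  }
}

playPattern(downwardStroke);
```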
CollabAlly: Accessible Collaboration Awareness in Document Editing
Cheuk Yin Phipson Lee, Zhuohao Zhang, Jaylin Herskovitz, Jooyoung Seo, Anhong Guo
{"title":"CollabAlly: Accessible Collaboration Awareness in Document Editing","authors":"Cheuk Yin Phipson Lee, Zhuohao Zhang, Jaylin Herskovitz, Jooyoung Seo, Anhong Guo","doi":"10.1145/3441852.3476562","DOIUrl":"https://doi.org/10.1145/3441852.3476562","url":null,"abstract":"Collaborative document editing tools are widely used in both professional and academic workplaces. While these tools provide some accessibility features, it is still challenging for blind users to gain collaboration awareness that sighted people can easily obtain using visual cues (e.g., who edited or commented where and what in the document). To address this gap, we present CollabAlly, a browser extension that makes extractable collaborative and contextual information in document editing accessible for blind users. With CollabAlly, blind users can easily access collaborators’ information, track real-time or asynchronous content and comment changes, and navigate through these elements. In order to convey this complex information through audio, CollabAlly uses voice fonts and spatial audio to enhance users’ collaboration awareness in shared documents. Through a series of pilot studies with a coauthor who is blind, CollabAlly’s design was refined to include more information and to be more compatible with existing screen readers.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125476147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
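CollabAlly's "voice font" idea, a distinct voice per collaborator, can be approximated with the browser's standard SpeechSynthesis API. The sketch below shows only that idea and is not CollabAlly's actual code; the spatial-audio half of the design is omitted, since routing synthesized speech through the Web Audio panner is not standardized.

```typescript
// Minimal sketch, assuming a browser context: announce each
// collaborator's edits with a distinct speech-synthesis voice so a
// listener can tell collaborators apart by ear. Note getVoices() may
// be empty until the browser fires its "voiceschanged" event.
const voiceOf = new Map<string, SpeechSynthesisVoice>();

function assignVoices(collaborators: string[]): void {
  const voices = speechSynthesis.getVoices();
  collaborators.forEach((name, i) => {
    voiceOf.set(name, voices[i % voices.length]); // one voice per person
  });
}

function announceEdit(collaborator: string, change: string): void {
  const u = new SpeechSynthesisUtterance(`${collaborator} ${change}`);
  const voice = voiceOf.get(collaborator);
  if (voice) u.voice = voice;
  speechSynthesis.speak(u);
}

assignVoices(["Ada", "Grace"]);
announceEdit("Ada", "edited paragraph 2");
announceEdit("Grace", "commented on the title");
```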
Understanding Barriers and Design Opportunities to Improve Healthcare and QOL for Older Adults through Voice Assistants
Chen Chen, Janet G. Johnson, Kemeberly Charles, Alice Lee, Ella T. Lifset, M. Hogarth, A. Moore, E. Farcas, Nadir Weibel
{"title":"Understanding Barriers and Design Opportunities to Improve Healthcare and QOL for Older Adults through Voice Assistants","authors":"Chen Chen, Janet G. Johnson, Kemeberly Charles, Alice Lee, Ella T. Lifset, M. Hogarth, A. Moore, E. Farcas, Nadir Weibel","doi":"10.1145/3441852.3471218","DOIUrl":"https://doi.org/10.1145/3441852.3471218","url":null,"abstract":"Voice-based Intelligent Virtual Assistants (IVAs) promise to improve healthcare management and Quality of Life (QOL) by introducing the paradigm of hands-free and eye-free interactions. However, there has been little understanding regarding the challenges for designing such systems for older adults, especially when it comes to healthcare related tasks. To tackle this, we consider the processes of care delivery and QOL enhancements for older adults as a collaborative task between patients and providers. By interviewing 16 older adults living independently or semi–independently and 5 providers, we identified 12 barriers that older adults might encounter during daily routine and while managing health. We ultimately highlighted key design challenges and opportunities that might be introduced when integrating voice-based IVAs into the life of older adults. Our work will benefit practitioners who study and attempt to create full-fledged IVA-powered smart devices to deliver better care and support an increased QOL for aging populations.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124349666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Wearable Interactions for Users with Motor Impairments: Systematic Review, Inventory, and Research Implications
Alexandru-Ionuț Șiean, Radu-Daniel Vatavu
{"title":"Wearable Interactions for Users with Motor Impairments: Systematic Review, Inventory, and Research Implications","authors":"Alexandru-Ionuț Șiean, Radu-Daniel Vatavu","doi":"10.1145/3441852.3471212","DOIUrl":"https://doi.org/10.1145/3441852.3471212","url":null,"abstract":"We conduct a systematic literature review on wearable interactions for users with motor impairments and report results from a meta-analysis of 57 scientific articles identified in the ACM DL and IEEE Xplore databases. Our findings show limited research conducted on accessible wearable interactions (e.g., just four papers addressing smartwatch input), a disproportionate interest for hand gestures compared to other input modalities for wearable devices, and low numbers of participants with motor impairments involved in user studies about wearable interactions (a median of 6.0 and average of 8.2 participants per study). We compile an inventory of 92 finger, hand, head, shoulder, eye gaze, and foot gesture commands for smartwatches, smartglasses, headsets, earsets, fitness trackers, data gloves, and armband wearable devices extracted from the scientific literature that we surveyed. Based on our findings, we propose four directions for future research on accessible wearable interactions for users with motor impairments.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116666267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
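For readers unfamiliar with the two summary statistics quoted above (median 6.0 and average 8.2 participants per study), the sketch below computes both; the participant counts are invented for illustration and are not the review's data.

```typescript
// Worked example of the two summary statistics the review reports
// (median and mean participants per study). The counts below are
// made up; the paper's numbers come from its 57 surveyed articles.
const participants = [4, 5, 6, 6, 8, 10, 18];

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

console.log(mean(participants).toFixed(1)); // "8.1" for this toy sample
console.log(median(participants));          // 6 for this toy sample
```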
Fluent: An AI Augmented Writing Tool for People who Stutter
Bhavya Ghai, Klaus Mueller
{"title":"Fluent: An AI Augmented Writing Tool for People who Stutter","authors":"Bhavya Ghai, Klaus Mueller","doi":"10.1145/3441852.3471211","DOIUrl":"https://doi.org/10.1145/3441852.3471211","url":null,"abstract":"Stuttering is a speech disorder which impacts the personal and professional lives of millions of people worldwide. To save themselves from stigma and discrimination, people who stutter (PWS) may adopt different strategies to conceal their stuttering. One of the common strategies is word substitution where an individual avoids saying a word they might stutter on and use an alternative instead. This process itself can cause stress and add more burden. In this work, we present Fluent, an AI augmented writing tool which assists PWS in writing scripts which they can speak more fluently. Fluent embodies a novel active learning based method of identifying words an individual might struggle pronouncing. Such words are highlighted in the interface. On hovering over any such word, Fluent presents a set of alternative words which have similar meaning but are easier to speak. The user is free to accept or ignore these suggestions. Based on such user interaction (feedback), Fluent continuously evolves its classifier to better suit the personalized needs of each user. We evaluated our tool by measuring its ability to identify difficult words for 10 simulated users. We found that our tool can identify difficult words with a mean accuracy of over 80% in under 20 interactions and it keeps improving with more feedback. Our tool can be beneficial for certain important life situations like giving a talk, presentation, etc. The source code for this tool has been made publicly accessible at github.com/bhavyaghai/Fluent.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121952052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
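The active-learning loop described above (flag likely-difficult words, let the user accept or ignore suggestions, update the model from that feedback) can be sketched as a tiny online logistic model. Everything below, the features, weights, and plosive heuristic, is an invented illustration, not Fluent's classifier; the real tool is at github.com/bhavyaghai/Fluent.

```typescript
// Sketch of the active-learning idea: score each word's "difficulty"
// from simple features and nudge the weights whenever the user
// accepts or ignores a suggestion.
type Feedback = { word: string; difficult: boolean };

// Toy features: scaled length, and whether the word starts with a
// plosive (a heuristic assumed here purely for illustration).
function features(word: string): number[] {
  const plosive = /^[pbtdkg]/i.test(word) ? 1 : 0;
  return [word.length / 10, plosive];
}

let weights = [0.5, 0.5]; // updated online from user feedback

function difficulty(word: string): number {
  const f = features(word);
  const z = f.reduce((s, x, i) => s + x * weights[i], 0);
  return 1 / (1 + Math.exp(-z)); // logistic squash to [0, 1]
}

// One online-learning step: move weights toward the user's label.
function update({ word, difficult }: Feedback, lr = 0.1): void {
  const err = (difficult ? 1 : 0) - difficulty(word);
  weights = weights.map((w, i) => w + lr * err * features(word)[i]);
}

console.log(difficulty("presentation").toFixed(2)); // flag if above a threshold
update({ word: "presentation", difficult: true });  // user confirms difficulty
console.log(difficulty("presentation").toFixed(2)); // score rises slightly
```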
Non-Visual Cooking: Exploring Practices and Challenges of Meal Preparation by People with Visual Impairments
Franklin Mingzhe Li, Jamie Dorst, Peter Cederberg, Patrick Carrington
{"title":"Non-Visual Cooking: Exploring Practices and Challenges of Meal Preparation by People with Visual Impairments","authors":"Franklin Mingzhe Li, Jamie Dorst, Peter Cederberg, Patrick Carrington","doi":"10.1145/3441852.3471215","DOIUrl":"https://doi.org/10.1145/3441852.3471215","url":null,"abstract":"The reliance on vision for tasks related to cooking and eating healthy can present barriers to cooking for oneself and achieving proper nutrition. There has been little research exploring cooking practices and challenges faced by people with visual impairments. We present a content analysis of 122 YouTube videos to highlight the cooking practices of visually impaired people, and we describe detailed practices for 12 different cooking activities (e.g., cutting and chopping, measuring, testing food for doneness). Based on the cooking practices, we also conducted semi-structured interviews with 12 visually impaired people who have cooking experience and show existing challenges, concerns, and risks in cooking (e.g., tracking the status of tasks in progress, verifying whether things are peeled or cleaned thoroughly). We further discuss opportunities to support the current practices and improve the independence of people with visual impairments in cooking (e.g., zero-touch interactions for cooking). Overall, our findings provide guidance for future research exploring various assistive technologies to help people cook without relying on vision.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126956530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
See-Through Captions: Real-Time Captioning on Transparent Display for Deaf and Hard-of-Hearing People
Kenta Yamamoto, Ippei Suzuki, Akihisa Shitara, Yoichi Ochiai
{"title":"See-Through Captions: Real-Time Captioning on Transparent Display for Deaf and Hard-of-Hearing People","authors":"Kenta Yamamoto, Ippei Suzuki, Akihisa Shitara, Yoichi Ochiai","doi":"10.1145/3441852.3476551","DOIUrl":"https://doi.org/10.1145/3441852.3476551","url":null,"abstract":"Real-time captioning is a useful technique for deaf and hard-of-hearing (DHH) people to talk to hearing people. With the improvement in device performance and the accuracy of automatic speech recognition (ASR), real-time captioning is becoming an important tool for helping DHH people in their daily lives. To realize higher-quality communication and overcome the limitations of mobile and augmented-reality devices, real-time captioning that can be used comfortably while maintaining nonverbal communication and preventing incorrect recognition is required. Therefore, we propose a real-time captioning system that uses a transparent display. In this system, the captions are presented on both sides of the display to address the problem of incorrect ASR results, and the highly transparent display makes it possible to see both the body language and the captions.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131133454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
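The captioning pipeline described above can be approximated in a few lines with the browser's Web Speech API. This is a sketch under assumptions (a Chromium-based browser, invented element ids), not the authors' system; in particular, presenting captions on both sides of a transparent display is reduced here to mirroring one caption element.

```typescript
// Minimal sketch: stream speech through the Web Speech API and write
// interim results into two caption elements, one mirrored so it reads
// correctly from the far side of a transparent display.
const near = document.getElementById("caption-near")!;
const far = document.getElementById("caption-far")!;
far.style.transform = "scaleX(-1)"; // mirror for the opposite side

const SR = (window as any).webkitSpeechRecognition; // Chromium-prefixed API
const recognition = new SR();
recognition.continuous = true;      // keep listening across utterances
recognition.interimResults = true;  // show partial hypotheses live

recognition.onresult = (event: any) => {
  let text = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    text += event.results[i][0].transcript;
  }
  near.textContent = text; // both sides show the same caption text
  far.textContent = text;
};

recognition.start();
```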
Developing Accessible Mobile Applications with Cross-Platform Development Frameworks
S. Mascetti, Mattia Ducci, Niccolò Cantù, Paolo Pecis, D. Ahmetovic
{"title":"Developing Accessible Mobile Applications with Cross-Platform Development Frameworks","authors":"S. Mascetti, Mattia Ducci, Niccolò Cantù, Paolo Pecis, D. Ahmetovic","doi":"10.1145/3441852.3476469","DOIUrl":"https://doi.org/10.1145/3441852.3476469","url":null,"abstract":"We illustrate our experience, gained over years of involvement in multiple research and commercial projects, in developing accessible mobile apps with cross-platform development frameworks (CPDF). These frameworks allow the developers to write the app code only once and run it on both iOS and Android. However, they have limited support for accessibility features, in particular for what concerns the interaction with the system screen reader. To study the coverage of accessibility features in CPDFs, we first systematically analyze screen reader APIs available in native iOS and Android, and we examine whether and at what level the same functionalities are available in two popular CPDF: Xamarin and React Native. This analysis unveils that there are many functionalities shared between native iOS and Android APIs, but most of them are not available neither in React Native nor in Xamarin. In particular, not even all basic APIs are exposed by the examined CPDF. Accessing the unavailable APIs is still possible, but it requires additional effort by the developers who need to write platform-specific code in native APIs, hence partially negating the advantages of CPDF. To address this problem, we consider a representative set of native APIs that cannot be directly accessed from React Native and Xamarin and we report challenges encountered in accessing them.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131128532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
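The API-coverage gap the paper describes can be made concrete in React Native. The sketch below uses React Native's real AccessibilityInfo calls for the basics, then falls back to a hypothetical native module for anything the cross-platform layer does not expose; `NativeScreenReader` and its `focusElement` method are invented stand-ins for the platform-specific bridge work the authors describe.

```typescript
// Basic screen-reader support exists in the cross-platform layer;
// richer native screen-reader APIs require a custom native bridge.
import { AccessibilityInfo, NativeModules } from "react-native";

async function describeSupport(): Promise<void> {
  // Available directly from the cross-platform API:
  const enabled = await AccessibilityInfo.isScreenReaderEnabled();
  AccessibilityInfo.announceForAccessibility(
    enabled ? "Screen reader detected" : "Screen reader off"
  );

  // Anything beyond this typically needs platform-specific code,
  // partially negating the write-once advantage the paper notes:
  const native = NativeModules.NativeScreenReader; // hypothetical bridge
  if (native?.focusElement) {
    native.focusElement("submit-button"); // invented method name
  }
}

describeSupport();
```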