ACM Transactions on Accessible Computing: Latest Articles

Reviewing Speech Input with Audio
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2020-04-21 DOI: 10.1145/3382039
Jonggi Hong, Christine Vaing, Hernisa Kacorri, Leah Findlater
Abstract: Speech input is a primary method of interaction for blind mobile device users, yet the process of dictating and reviewing recognized text through audio only (i.e., without access to visual feedback...
Pages: 1-28
Citations: 2
Extraction of Emotional Information via Visual Scanning Patterns: A Feasibility Study of Participants with Schizophrenia and Neurotypical Individuals
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2018-11-01 DOI: 10.1145/3282434
Joshua Wade, Heathman S Nichols, Megan Ichinose, Dayi Bian, Esube Bekele, Matthew Snodgress, Ashwaq Zaini Amat, Eric Granholm, Sohee Park, Nilanjan Sarkar
Abstract: Emotion recognition impairment is a core feature of schizophrenia (SZ), present throughout all stages of the condition, and leads to poor social outcomes. However, the underlying mechanisms that give rise to such deficits have not been elucidated, so it has been difficult to develop precisely targeted interventions. Evidence supports using methods designed to modify patterns of visual attention in individuals with SZ to effect meaningful improvements in social cognition. To date, however, attention-shaping systems have not fully utilized available technology (e.g., eye tracking) to achieve this goal. The current work consisted of the design and feasibility testing of a novel gaze-sensitive social skills intervention system called MASI-VR. Adults from an outpatient clinic with a confirmed SZ diagnosis (n=10) and a comparison sample of neurotypical participants (n=10) were evaluated on measures of emotion recognition and visual attention at baseline, and a pilot test of the intervention system was evaluated on the SZ sample following five training sessions over three weeks. Consistent with the literature, participants in the SZ group demonstrated lower recognition of faces showing medium-intensity fear, spent more time deliberating about presented emotions, and made fewer fixations than their neurotypical peers. Furthermore, participants in the SZ group showed significant improvement in the recognition of fearful faces post-training. Preliminary evidence supports the feasibility of a gaze-sensitive paradigm for assessment and training of emotion recognition and social attention in individuals with SZ, warranting further evaluation of the novel intervention.
Volume 11, Issue 4
Citations: 7
Improving the Accessibility of Mobile OCR Apps Via Interactive Modalities
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2017-10-01 DOI: 10.1145/3075300
Michael Cutter, Roberto Manduchi
Abstract: Mobile optical character recognition (OCR) apps have come of age. Many blind individuals use them on a daily basis. The usability of such tools, however, is limited by the requirement that a good picture of the text to be read must be taken, something that is difficult to do without sight. Some mobile OCR apps already implement auto-shot and guidance mechanisms to facilitate this task. In this paper, we describe two experiments with blind participants, who tested these two interactive mechanisms on a customized iPhone implementation. These experiments bring to light a number of interesting aspects of accessing a printed document without sight, and enable a comparative analysis of the available interaction modalities.
Volume 10, Issue 4
Citations: 0
Design and Real-World Evaluation of Eyes-Free Yoga: An Exergame for Blind and Low-Vision Exercise
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2017-04-01 DOI: 10.1145/3022729
Kyle Rector, Roger Vilardaga, Leo Lansky, Kellie Lu, Cynthia L Bennett, Richard E Ladner, Julie A Kientz
Abstract: People who are blind or have low vision may have a harder time participating in exercise due to inaccessibility or lack of encouragement. To address this, we developed Eyes-Free Yoga, which uses the Microsoft Kinect to act as a yoga instructor and gives personalized auditory feedback based on skeletal tracking. We conducted two studies on two versions of Eyes-Free Yoga: (1) a controlled study with 16 people who are blind or have low vision to evaluate the feasibility of a proof-of-concept, and (2) an 8-week in-home deployment study with 4 people who are blind or have low vision, using a fully functioning exergame containing four full workouts and motivational techniques. We found that participants preferred the personalized feedback for yoga postures during the laboratory study. The personalized feedback was therefore used to build the core components of the system used in the deployment study and was included in both study conditions. From the deployment study, we found that participants practiced yoga consistently throughout the 8-week period (average hours = 17; average days of practice = 24), almost reaching the American Heart Association's recommended exercise guidelines. On average, the motivational techniques improved participants' user experience and increased their exercise frequency and duration. The findings of this work have implications for eyes-free exergame design, including engaging domain experts, piloting with inexperienced users, using musical metaphors, and designing for in-home use cases.
Volume 9, Issue 4
Citations: 0
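The "personalized auditory feedback based on skeletal tracking" described in the Eyes-Free Yoga abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: the joint-angle computation, target angle, tolerance, and feedback phrases are all assumptions chosen for illustration.

```python
import math

# Illustrative sketch (not the published system): compute the angle at a
# tracked joint from three 2-D skeleton points, then emit a spoken-style
# correction only when the pose deviates from a target angle.

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def feedback(angle, target=180.0, tolerance=15.0):
    """Return an audio cue; stay encouraging when the pose is close enough."""
    if abs(angle - target) <= tolerance:
        return "Good. Hold the pose."
    if angle < target:
        return "Straighten your arm a little more."
    return "Relax the joint slightly."

# Hypothetical shoulder, elbow, and wrist positions for a nearly straight arm:
ang = joint_angle((0.0, 0.0), (1.0, 0.0), (2.0, 0.17))
print(feedback(ang))  # within tolerance of 180 degrees
```

In a real exergame the feedback string would be routed to a text-to-speech engine rather than printed, and the joint positions would come from the sensor's skeleton stream each frame.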
Mind Your Crossings: Mining GIS Imagery for Crosswalk Localization
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2017-04-01 DOI: 10.1145/3046790
Dragan Ahmetovic, Roberto Manduchi, James M Coughlan, Sergio Mascetti
Abstract: For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between respondents from the U.S. and from Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer-vision-based technique that mines existing spatial image databases to discover zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm can be improved by a final crowdsourcing validation. To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives.
Volume 9, Issue 4
Citations: 30
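The cascade described in the abstract above (cheap, high-recall candidate detection in overhead imagery, followed by precise validation against street-level imagery) can be sketched in a few lines. The detectors here are stand-ins so the control flow is runnable; the function names, the stripe heuristic, and the data shapes are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of a cascaded crosswalk detector (not the paper's
# implementation). Stage 1 proposes candidates from overhead tiles;
# Stage 2 validates each candidate against street-level data.

def detect_candidates(satellite_tiles):
    """Stage 1: high-recall scan over overhead tiles.

    Each tile is a (location, stripe_count) pair; zebra crossings appear
    as runs of parallel stripes, so keep tiles with enough stripes."""
    return [loc for loc, stripes in satellite_tiles if stripes >= 4]

def validate_street_view(location, street_view_index):
    """Stage 2: precise check, standing in for matching the candidate
    against a spatially registered street-level image."""
    return street_view_index.get(location, False)

def find_crosswalks(satellite_tiles, street_view_index):
    """Cascade: only Stage-1 candidates reach the expensive Stage-2 check."""
    candidates = detect_candidates(satellite_tiles)
    return [loc for loc in candidates
            if validate_street_view(loc, street_view_index)]

tiles = [("A", 6), ("B", 2), ("C", 5)]   # (location, stripe count)
street = {"A": True, "C": False}          # street-level confirmation
print(find_crosswalks(tiles, street))     # cascade keeps only "A"
```

The design point is the same as in the paper: the cheap overhead pass prunes most of the search area, so the expensive street-level check (and, in the full system, the final crowdsourced validation) only runs on a small candidate set.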
Stroke Therapy Through Motion-Based Games: A Case Study
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2011-11-01 DOI: 10.1145/1878803.1878842
Gazihan Alankus, Rachel Proffitt, Caitlin L. Kelleher, J. Engsberg
Abstract: In the United States alone, more than five million people are living with long-term motor impairments caused by a stroke. Video game-based therapies show promise in helping people recover lost range of motion and motor control. While researchers have demonstrated the potential utility of game-based rehabilitation through controlled studies, relatively little work has explored longer-term home-based use of therapeutic games. We conducted a six-week home study with a 62-year-old woman who was seventeen years post-stroke. She played therapeutic games for approximately one hour a day, five days a week. Over the six weeks, she recovered significant motor abilities, which is unexpected given the time since her stroke. Through observations and interviews, we present lessons learned about the barriers and opportunities that arise from long-term home-based use of therapeutic games.
Citations: 30
The Effect of Voice Output on the AAC-Supported Conversations of Persons with Alzheimer's Disease
IF 2.4
ACM Transactions on Accessible Computing Pub Date: 2009-03-01 DOI: 10.1145/1497302.1497305
Melanie Fried-Oken, Charity Rowland, Glory Baker, Mayling Dixon, Carolyn Mills, Darlene Schultz, Barry Oken
Abstract: The purpose of this study was to determine whether the presence or absence of digitized 1-2 word voice output on a direct-selection, customized augmentative and alternative communication (AAC) device would affect the impoverished conversations of persons with dementia. Thirty adults with moderate Alzheimer's disease participated in two personally relevant conversations with an AAC device. For 12 of the participants the AAC device included voice output. The AAC device was the Flexiboard™ containing 16 messages needed to discuss a favorite autobiographical topic chosen by the participant and his or her family caregivers. Ten-minute conversations were videotaped in participants' residences and analyzed for four conversational measures related to the participants' communicative behavior. Results show that AAC devices with digitized voice output depress conversational performance and distract participants with moderate Alzheimer's disease as compared to similar devices without voice output. There were significantly more 1-word utterances and fewer total utterances when AAC devices included voice output, and the rate of topic elaborations/initiations was significantly lower when voice output was present. The discussion addresses the novelty of voice output for this population of elders and the need to train elders to use this technology.
Volume 1, Issue 3, Page 15
Citations: 17