Proceedings of the 17th International Web for All Conference: Latest Publications

Autism detection based on eye movement sequences on the web: a scanpath trend analysis approach
Sukru Eraslan, Y. Yeşilada, Victoria Yaneva, S. Harper
Pub Date: 2020-04-20 | DOI: 10.1145/3371300.3383340
Abstract: The autism diagnostic procedure is subjective, challenging and expensive, and relies on behavioral, historical and parental-report information. In our previous work, we proposed a machine learning classifier to be used as a potential screening tool or in conjunction with other diagnostic methods, thus aiding established diagnostic procedures. The classifier uses people's eye movements on web pages, but it considers only non-sequential data. It achieves its best accuracy by combining data from several web pages, and its accuracy varies across pages. In this paper, we investigate whether it is possible to detect autism based on eye-movement sequences and to achieve stable accuracy across different web pages, so that the approach does not depend on specific pages. We use Scanpath Trend Analysis (STA), which is designed to identify the trending path of a group of users on a web page based on their eye movements. We first identify the trending paths of people with autism and of neurotypical people. To detect whether or not a person has autism, we calculate the similarity of his or her path to both trending paths. If the path is more similar to the trending path of neurotypical people, we classify the person as neurotypical; otherwise, we classify him or her as a person with autism. We systematically evaluate our approach with an eye-tracking dataset of 15 verbal and highly independent people with autism and 15 neurotypical people on six web pages. Our evaluation shows that the STA approach performs better on individual web pages and provides more stable accuracy across different pages.
Citations: 12
Towards generating web-accessible STEM documents from PDF
V. Sorge, Akashdeep Bansal, Neha Jadhav, Himanshu Garg, Ayushi Verma, M. Balakrishnan
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383351
Abstract: PDF is still a very popular format that is widely used to exchange and archive electronic documents. Although considerable efforts have been made to ensure the accessibility of PDF documents, they are still far from ideal when complex content such as formulas, diagrams or tables is present. Unfortunately, many publications in scientific subjects are available only in PDF format and are therefore, if at all, only partially accessible. In this paper, we present a fully automated web-based technology to convert PDF documents into an accessible single-file format. We concentrate on presenting working solutions for mathematical formulas and tables, while also discussing some of the open problems in this context and how we aim to solve them in the future.
Citations: 7
Game changer: accessible audio and tactile guidance for board and card games
Gabriella M. Johnson, Shaun K. Kane
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383347
Abstract: While board games are a popular social activity, their reliance on visual information can create accessibility problems for blind and visually impaired players. Because some players cannot easily read cards or locate pieces, they may be at a disadvantage or may be unable to play a game without sighted help. We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of a board game accessible to blind and visually impaired players. In this paper, we describe the design of Game Changer and present findings from a user study in which seven blind participants used Game Changer to play against a sighted partner. Most players stated that the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.
Citations: 2
Using a participatory activities toolkit to elicit privacy expectations of adaptive assistive technologies
Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383336
Abstract: Individuals whose abilities change over time can benefit from assistive technologies that detect and adapt to their current needs. While these Adaptive Assistive Technologies (AATs) offer exciting opportunities, their use presents an often-overlooked privacy tradeoff between usability and the disclosure of ability data. To explore this tradeoff from end-user perspectives, we developed a participatory activities toolkit comprising tangible low-fidelity physical cards, charts, and two software AAT prototypes. We used the kit in interviews with six older adults who experience pointing and typing difficulties when accessing the Internet. Participants had conflicting views about AATs collecting their data, and strong preferences about what data should be collected, how it should be used, and who should have access to it. The contributions of this paper are twofold: (1) we describe a novel approach to eliciting detailed end-user privacy preferences and expectations, and (2) we provide insights from representative users of AATs about their privacy.
Citations: 10
Deaf and hard-of-hearing users' prioritization of genres of online video content requiring accurate captions
Larwan Berke, Matthew Seita, Matt Huenerfauth
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383337
Abstract: Online video is an important information source, yet its pace of growth, including user-submitted content, is so rapid that automatic captioning technologies are needed to make content accessible for people who are Deaf or Hard-of-Hearing (DHH). To support the future creation of a research dataset of online videos, we must prioritize which genres of online video content DHH users believe are most important to caption accurately. Our first contribution is to validate that the Best-Worst Scaling (BWS) methodology can accurately gather judgments on this topic, through an in-person study with 25 DHH users using a card-sorting methodology to rank the importance of accurate captions for various YouTube genres of online video. Our second contribution is to identify the video genres of highest captioning importance via an online survey with 151 DHH individuals; those participants ranked News and Politics, Education, and Technology and Science highest.
Citations: 8
Accessible conversational user interfaces: considerations for design
K. Lister, Tim Coughlan, Francisco Iniesto, N. Freear, P. Devine
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383343
Abstract: Conversational user interfaces (CUIs), such as chatbots and voice assistants, are increasingly common in day-to-day life and can be expected to become ever more pervasive in the future. These interfaces are being designed for ever more complex interactions, and they appear to have the potential to help people with disabilities interact through the web and with technologies embedded in the environment. However, to fulfil this promise they need to be designed to be accessible. This paper reviews a range of current guidance, reports, research and literature on accessible design for different disability groups, including users with mental health issues, autism, health conditions, cognitive disabilities, dyslexia or learning difficulties, and sensory, mobility or dexterity impairments. We collate the elements of this body of guidance that appear relevant to the design of accessible CUIs, and identify instances where the guidance is less conclusive and requires further exploration. From this, we develop a set of questions that could be useful in the further research and development of accessible CUIs. We conclude by considering why CUIs could present opportunities for furthering accessibility, introducing an example of this potential: a project to design an assistant that supports students in disclosing their disabilities and organising support, without the need to fill in forms.
Citations: 34
Tables on the web accessible?: unfortunately not!
Waqar Haider, Y. Yeşilada
Pub Date: 2020-04-17 | DOI: 10.1145/3371300.3383349
Abstract: Web accessibility guidelines, in particular WCAG (the Web Content Accessibility Guidelines), cover a wide range of recommendations for making web content more accessible, including technical guidance on making certain structures, such as tables, accessible. Even though many studies investigate the accessibility of certain types of web sites, or of web sites from certain countries, to our knowledge no specific study has looked at the accessibility of tables on the web. In this paper, we present a systematic study that analyzes the accessibility of more than 16,000 table elements crawled from more than 900 different web pages. The study shows that tables are still widely used for layout, and that the WCAG guidelines for data tables are not followed. Our research demonstrates the need for smart systems that automatically handle the accessibility of structures such as tables.
Citations: 4
A Saliency-driven Video Magnifier for People with Low Vision
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I. V. Ramakrishnan
Pub Date: 2020-04-01 (Epub 2020-04-20) | DOI: 10.1145/3371300.3383356
Abstract: Consuming video content poses significant challenges for many users of screen magnifiers, the "go-to" assistive technology for people with low vision. While screen magnifier software can achieve a zoom factor that makes video content visible to low-vision users, navigating through videos is often a major challenge for them. Towards making videos more accessible for low-vision users, we have developed the SViM video magnifier system [6]. Specifically, SViM consists of three different magnifier interfaces with easy-to-use means of interaction. All three interfaces are driven by visual saliency as a guiding signal, which provides a pixel-level quantification of interestingness. Saliency information, provided as a heatmap, is then processed to obtain distinct regions of interest. These regions of interest are tracked over time and displayed using an easy-to-use interface. We present a description of our overall design and interfaces.
Citations: 1
Indoor Localization for Visually Impaired Travelers Using Computer Vision on a Smartphone
Giovanni Fusco, James M. Coughlan
Pub Date: 2020-04-01 | DOI: 10.1145/3371300.3383345
Abstract: Wayfinding is a major challenge for visually impaired travelers, who generally lack access to visual cues such as landmarks and informational signs that many travelers rely on for navigation. Indoor wayfinding is particularly challenging, since the most commonly used source of location information for wayfinding, GPS, is inaccurate indoors. We describe a computer vision approach to indoor localization that runs as a real-time app on a conventional smartphone, and that is intended to support a full-featured future wayfinding app including turn-by-turn directions. Our approach combines computer vision, existing informational signs such as Exit signs, inertial sensors and a 2D map to estimate and track the user's location in the environment. An important feature of our approach is that it requires no new physical infrastructure. While our approach requires the user to either hold the smartphone or wear it (e.g., on a lanyard) with the camera facing forward while walking, it has the advantage of not forcing the user to aim the camera at specific signs, which would be challenging for people with low or no vision. We demonstrate the feasibility of our approach with five blind travelers navigating an indoor space, with localization accuracy of roughly 1 meter once the localization algorithm has converged.
Citations: 0
Proceedings of the 17th International Web for All Conference
Pub Date: 2020-01-01 | DOI: 10.1145/3371300
Citations: 2