Sukru Eraslan, Y. Yeşilada, Victoria Yaneva, S. Harper. "Autism detection based on eye movement sequences on the web: a scanpath trend analysis approach." Proceedings of the 17th International Web for All Conference, published 2020-04-20. DOI: https://doi.org/10.1145/3371300.3383340

Abstract: The autism diagnostic procedure is subjective, challenging and expensive, and relies on behavioral, historical and parental-report information. In our previous work, we proposed a machine learning classifier to be used as a potential screening tool, or in conjunction with other diagnostic methods, thus aiding established diagnostic practice. The classifier uses people's eye movements on web pages, but it considers only non-sequential data. It achieves its best accuracy by combining data from several web pages, and its accuracy varies across individual pages. In the present paper, we investigate whether it is possible to detect autism from eye-movement sequences while achieving stable accuracy across different web pages, so that detection does not depend on specific pages. We used Scanpath Trend Analysis (STA), which is designed to identify the trending path of a group of users on a web page based on their eye movements. We first identify the trending paths of people with autism and of neurotypical people. To detect whether or not a person has autism, we calculate the similarity of his/her path to each group's trending path: if the path is more similar to the trending path of neurotypical people, we classify the person as neurotypical; otherwise, we classify him/her as a person with autism. We systematically evaluate our approach with an eye-tracking dataset of 15 verbal and highly independent people with autism and 15 neurotypical people on six web pages. Our evaluation shows that the STA approach performs better on individual web pages and provides more stable accuracy across different pages.

V. Sorge, Akashdeep Bansal, Neha Jadhav, Himanshu Garg, Ayushi Verma, M. Balakrishnan. "Towards generating web-accessible STEM documents from PDF." Proceedings of the 17th International Web for All Conference, published 2020-04-17. DOI: https://doi.org/10.1145/3371300.3383351

Abstract: PDF is still a very popular format, widely used to exchange and archive electronic documents. Although considerable efforts have been made to ensure the accessibility of PDF documents, they are still far from ideal when complex content such as formulas, diagrams or tables is present. Unfortunately, many scientific publications are available only in PDF and are therefore, if at all, only partially accessible. In this paper, we present a fully automated web-based technology to convert PDF documents into an accessible single-file format. We concentrate on presenting working solutions for mathematical formulas and tables, while also discussing some of the open problems in this context and how we aim to solve them in the future.

{"title":"Game changer: accessible audio and tactile guidance for board and card games","authors":"Gabriella M. Johnson, Shaun K. Kane","doi":"10.1145/3371300.3383347","DOIUrl":"https://doi.org/10.1145/3371300.3383347","url":null,"abstract":"While board games are a popular social activity, their reliance on visual information can create accessibility problems for blind and visually impaired players. Because some players cannot easily read cards or locate pieces, they may be at a disadvantage or may be unable to play a game without sighted help. We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players. In this paper, we describe the design of Game Changer and present findings from a user study in which seven blind participants used Game Changer to play against a sighted partner. Most players stated the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.","PeriodicalId":93137,"journal":{"name":"Proceedings of the 17th International Web for All Conference","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72946850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst. "Using a participatory activities toolkit to elicit privacy expectations of adaptive assistive technologies." Proceedings of the 17th International Web for All Conference, published 2020-04-17. DOI: https://doi.org/10.1145/3371300.3383336

Abstract: Individuals whose abilities change over time can benefit from assistive technologies that detect and adapt to their current needs. While these Adaptive Assistive Technologies (AATs) offer exciting opportunities, their use presents an often-overlooked privacy tradeoff between usability and the disclosure of ability data. To explore this tradeoff from end-user perspectives, we developed a participatory activities toolkit comprising tangible low-fidelity physical cards, charts, and two software AAT prototypes. We used the kit in interviews with six older adults who experience pointing and typing difficulties when accessing the Internet. Participants had conflicting views about AATs collecting their data, and strong preferences about what data should be collected, how it should be used, and who should have access to it. The contributions of this paper are twofold: (1) we describe a novel approach to eliciting detailed end-user privacy preferences and expectations, and (2) we provide insights from representative users of AATs about their privacy.

{"title":"Deaf and hard-of-hearing users' prioritization of genres of online video content requiring accurate captions","authors":"Larwan Berke, Matthew Seita, Matt Huenerfauth","doi":"10.1145/3371300.3383337","DOIUrl":"https://doi.org/10.1145/3371300.3383337","url":null,"abstract":"Online video is an important information source, yet its pace of growth, including user-submitted content, is so rapid that automatic captioning technologies are needed to make content accessible for people who are Deaf or Hard-of-Hearing (DHH). To support future creation of a research dataset of online videos, we must prioritize which genres of online video content DHH users believe are of greatest importance to be accurately captioned. Our first contribution is to validate that the Best-Worst Scaling (BWS) methodology is able to accurately gather judgments on this topic by conducting an in-person study with 25 DHH users, using a card-sorting methodology to rank the importance for various YouTube genres of online video to be accurately captioned. Our second contribution is to identify video genres of highest captioning importance via an online survey with 151 DHH individuals, and those participants highly ranked: News and Politics, Education, and Technology and Science.","PeriodicalId":93137,"journal":{"name":"Proceedings of the 17th International Web for All Conference","volume":"74 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85422725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Lister, Tim Coughlan, Francisco Iniesto, N. Freear, P. Devine. "Accessible conversational user interfaces: considerations for design." Proceedings of the 17th International Web for All Conference, published 2020-04-17. DOI: https://doi.org/10.1145/3371300.3383343

Abstract: Conversational user interfaces (CUIs), such as chatbots and voice assistants, are increasingly common in day-to-day life and can be expected to become ever more pervasive. These interfaces are being designed for ever more complex interactions, and they appear to have the potential to benefit people with disabilities interacting through the web and with technologies embedded in the environment. However, to fulfil this promise they need to be designed to be accessible. This paper reviews a range of current guidance, reports, research and literature on accessible design for different disability groups, including users with mental health issues, autism, health conditions, cognitive disabilities, dyslexia or learning difficulties, and sensory, mobility or dexterity impairments. We collate the elements of this body of guidance that appear relevant to the design of accessible CUIs, as well as instances where the guidance is less conclusive and requires further exploration. From this, we develop a set of questions that could be useful in further research and development of accessible CUIs. We conclude by considering why CUIs could present opportunities for furthering accessibility, introducing an example of this potential: a project to design an assistant that supports students in disclosing their disabilities and organising support, without the need to fill in forms.

{"title":"Tables on the web accessible?: unfortunately not!","authors":"Waqar Haider, Y. Yeşilada","doi":"10.1145/3371300.3383349","DOIUrl":"https://doi.org/10.1145/3371300.3383349","url":null,"abstract":"Web accessibility guidelines, in particular, WCAG (Web Content Accessibility Guidelines), covers a wide range of recommendations for making web content more accessible. They have technical guidance on making certain structures accessible such as tables. Even though there are many studies that investigate the accessibility of certain types of web sites or web sites from certain countries, to our knowledge, there is no specific study that looks at the accessibility of tables on the web. In this paper, we present a systematic study that analyzes the accessibility of more than 16,000 table elements, crawled from more than 900 different web pages. This study shows that tables are still widely used for layout, and the guidelines related to data tables in WCAG are not followed. Our research is vital in demonstrating the need for smart systems that automatically handle the accessibility of structures such as tables.","PeriodicalId":93137,"journal":{"name":"Proceedings of the 17th International Web for All Conference","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90055620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I. V. Ramakrishnan. "A Saliency-driven Video Magnifier for People with Low Vision." Proceedings of the 17th International Web for All Conference, published 2020-04-01. DOI: https://doi.org/10.1145/3371300.3383356

Abstract: Consuming video content poses significant challenges for many users of screen magnifiers, the "go-to" assistive technology for people with low vision. While screen magnifier software can achieve a zoom factor that makes the content of a video visible to low-vision users, navigating through videos is often a major challenge for them. Towards making videos more accessible for low-vision users, we have developed the SViM video magnifier system [6]. SViM consists of three different magnifier interfaces with easy-to-use means of interaction, all driven by visual saliency as a guiding signal, which quantifies interestingness at the pixel level. Saliency information, provided as a heatmap, is processed to obtain distinct regions of interest; these regions are tracked over time and displayed through an easy-to-use interface. We present a description of our overall design and interfaces.

{"title":"Indoor Localization for Visually Impaired Travelers Using Computer Vision on a Smartphone.","authors":"Giovanni Fusco, James M Coughlan","doi":"10.1145/3371300.3383345","DOIUrl":"10.1145/3371300.3383345","url":null,"abstract":"<p><p>Wayfinding is a major challenge for visually impaired travelers, who generally lack access to visual cues such as landmarks and informational signs that many travelers rely on for navigation. Indoor wayfinding is particularly challenging since the most commonly used source of location information for wayfinding, GPS, is inaccurate indoors. We describe a computer vision approach to indoor localization that runs as a real-time app on a conventional smartphone, which is intended to support a full-featured wayfinding app in the future that will include turn-by-turn directions. Our approach combines computer vision, existing informational signs such as Exit signs, inertial sensors and a 2D map to estimate and track the user's location in the environment. An important feature of our approach is that it requires no new physical infrastructure. While our approach requires the user to either hold the smartphone or wear it (e.g., on a lanyard) with the camera facing forward while walking, it has the advantage of not forcing the user to aim the camera towards specific signs, which would be challenging for people with low or no vision. We demonstrate the feasibility of our approach with five blind travelers navigating an indoor space, with localization accuracy of roughly 1 meter once the localization algorithm has converged.</p>","PeriodicalId":93137,"journal":{"name":"Proceedings of the 17th International Web for All Conference","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7643919/pdf/nihms-1611172.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38583060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 17th International Web for All Conference","authors":"","doi":"10.1145/3371300","DOIUrl":"https://doi.org/10.1145/3371300","url":null,"abstract":"","PeriodicalId":93137,"journal":{"name":"Proceedings of the 17th International Web for All Conference","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76400134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}