{"title":"An Open-source Tool for Simplifying Computer and Assistive Technology Use: Tool for simplification and auto-personalization of computers and assistive technologies.","authors":"Gregg C Vanderheiden, J Bern Jordan","doi":"10.1145/3441852.3476554","DOIUrl":"10.1145/3441852.3476554","url":null,"abstract":"<p><p>Computer access is increasingly critical for all aspects of life from education to employment to daily living, health and almost all types of participation. The pandemic has highlighted our dependence on technology, but the dependence existed before and is continuing after. Yet many face barriers due to disability, literacy, or digital literacy. Although the problems faced by individuals with disabilities have received focus for some time, the problems faced by people who just have difficulty in using technologies has not, but is a second large, yet less understood problem. Solutions exist but are often not installed, buried, hard to find, and difficult to understand and use. To address these problems, an open-source extension to the Windows and macOS operating systems has been under exploration and development by an international consortium of organizations, companies, and individuals. It combines auto-personalization, layering, and enhanced discovery, with the ability to Install on Demand (IoD) any assistive technologies a user needs. The software, called Morphic, is now installed on all of the computers across campus at several major universities and libraries in the US and Canada. It makes computers simpler to use, and allows whichever features or assistive technologies a person needs to appear on any computer they encounter (that has Morphic on it) and want to use at school, work, library, community center, etc. This demonstration will cover both the basic and advanced features as well as how to get free copies of the open-source software and configure it for school, work or personal use. It will also highlight lessons learned from the placements.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8620129/pdf/nihms-1752258.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39942022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind.","authors":"Kyungjun Lee, Daisuke Sato, Saki Asakawa, Chieko Asakawa, Hernisa Kacorri","doi":"10.1145/3441852.3471232","DOIUrl":"10.1145/3441852.3471232","url":null,"abstract":"<p><p>The spatial behavior of passersby can be critical to blind individuals to initiate interactions, preserve personal space, or practice social distancing during a pandemic. Among other use cases, wearable cameras employing computer vision can be used to extract proxemic signals of others and thus increase access to the spatial behavior of passersby for blind people. Analyzing data collected in a study with blind (N=10) and sighted (N=40) participants, we explore: (i) visual information on approaching passersby captured by a head-worn camera; (ii) pedestrian detection algorithms for extracting proxemic signals such as passerby presence, relative position, distance, and head pose; and (iii) opportunities and limitations of using wearable cameras for helping blind people access proxemics related to nearby people. Our observations and findings provide insights into dyadic behaviors for assistive pedestrian detection and lead to implications for the design of future head-worn cameras and interactions.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"21 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855357/pdf/nihms-1752252.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sharing Practices for Datasets Related to Accessibility and Aging.","authors":"Rie Kamikubo, Utkarsh Dwivedi, Hernisa Kacorri","doi":"10.1145/3441852.3471208","DOIUrl":"10.1145/3441852.3471208","url":null,"abstract":"<p><p>Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years. Our analysis highlights how researchers navigate tensions between benefits and risks in data collection and sharing. We uncover patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. We conclude by critically reflecting on challenges and opportunities related to locating and sharing accessibility datasets calling for technical, legal, and institutional privacy frameworks that are more attuned to concerns from these communities.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855358/pdf/nihms-1752251.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncovering Patterns in Reviewers' Feedback to Scene Description Authors.","authors":"Rosiana Natalie, Jolene Loh Kar Inn, Tan Huei Suen, Joshua Tseng Shi Hao, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3441852.3476550","DOIUrl":"10.1145/3441852.3476550","url":null,"abstract":"<p><p>Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1, 120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) <b>Quality</b>; commenting on different AD quality variables, (ii) <b>Speech Act</b>; the utterance or speech action that the reviewers used, (iii) <b>Required Action</b>; the recommended action that the authors should do to improve the AD, and (iv) <b>Guidance</b>; the additional help that the reviewers gave to help the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"93 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855355/pdf/nihms-1752255.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Efficacy of Collaborative Authoring of Video Scene Descriptions.","authors":"Rosiana Natalie, Joshua Tseng, Jolene Loh, Ian Luke Yi-Ren Chan, Huei Suen Tan, Ebrima H Jarjue, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3441852.3471201","DOIUrl":"10.1145/3441852.3471201","url":null,"abstract":"<p><p>The majority of online video contents remain inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily-available. We investigate the feasibility of creating more cost-effective audio descriptions that are also of high quality by involving novices. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer either sighted or blind to interact and contribute to scene descriptions (SDs)-text that can be transformed into audio through text-to-speech. Through a mixed-design study with <i>N</i> = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of <i>i.e.,</i> US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions lacked in other quality dimensions (<i>e.g.,</i> learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford it, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"17 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855356/pdf/nihms-1752253.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TableView: Enabling Eficient Access to Web Data Records for Screen-Magnifier Users.","authors":"Hae-Na Lee, Sami Uddin, Vikas Ashok","doi":"10.1145/3373625.3417030","DOIUrl":"https://doi.org/10.1145/3373625.3417030","url":null,"abstract":"<p><p>People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available fights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3417030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25455684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Screen Magnification for Office Applications.","authors":"Hae-Na Lee, Vikas Ashok, I V Ramakrishnan","doi":"10.1145/3373625.3418049","DOIUrl":"https://doi.org/10.1145/3373625.3418049","url":null,"abstract":"<p><p>People with low vision use screen magnifiers to interact with computers. They usually need to zoom and pan with the screen magnifier using predefined keyboard and mouse actions. When using office productivity applications (e.g., word processors and spreadsheet applications), the spatially distributed arrangement of UI elements makes interaction a challenging proposition for low vision users, as they can only view a fragment of the screen at any moment. They expend significant chunks of time panning back-and-forth between application ribbons containing various commands (e.g., formatting, design, review, references, etc.) and the main edit area containing user content. In this demo, we will demonstrate MagPro, an interface augmentation to office productivity tools, that not only reduces the interaction effort of low-vision screen-magnifier users by bringing the application commands as close as possible to the users' current focus in the edit area, but also lets them easily explore these commands using simple mouse actions. Moreover, MagPro automatically synchronizes the magnifier viewport with the keyboard cursor, so that users can always see what they are typing, without having to manually adjust the magnifier focus every time the keyboard cursor goes of screen during text entry.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3418049","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25358127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos.","authors":"Rosiana Natalie, Ebrima Jarjue, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3373625.3418030","DOIUrl":"10.1145/3373625.3418030","url":null,"abstract":"<p><p>Audio descriptions can make the visual content in videos accessible to people with visual impairments. However, the majority of the online videos lack audio descriptions due in part to the shortage of experts who can create high-quality descriptions. We present ViScene, a web-based authoring tool that taps into the larger pool of sighted non-experts to help them generate high-quality descriptions via two feedback mechanisms-succinct visualizations and comments from an expert. Through a mixed-design study with <i>N</i> = 6 participants, we explore the usability of ViScene and the quality of the descriptions created by sighted non-experts with and without feedback comments. Our results indicate that non-experts can produce better descriptions with feedback comments; preliminary insights also highlight the role that people with visual impairments can play in providing this feedback.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8286807/pdf/nihms-1716337.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39202022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ontology-Driven Transformations for PDF Form Accessibility.","authors":"Utku Uckun, Ali Selman Aydin, Vikas Ashok, I V Ramakrishnan","doi":"10.1145/3373625.3418047","DOIUrl":"10.1145/3373625.3418047","url":null,"abstract":"<p><p>Filling out PDF forms with screen readers has always been a challenge for people who are blind. Many of these forms are not interactive and hence are not accessible; even if they are interactive, the serial reading order of the screen reader makes it difficult to associate the correct labels with the form fields. This demo will present TransPAc[5], an assistive technology that enables blind people to fill out PDF forms. Since blind people are familiar with web browsing, TransPAc leverages this fact by faithfully transforming a PDF document with forms into a HTML page. The blind user fills out the form fields in the HTML page with their screen reader and these filled-in data values are transparently transferred onto the corresponding form fields in the PDF document. TransPAc thus addresses a long standing problem in PDF form accessibility.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7871703/pdf/nihms-1664031.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25358129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Sensory Changes in Everyday Technology use by People with Mild to Moderate Dementia.","authors":"Emma Dixon, Amanda Lazar","doi":"10.1145/3373625.3417000","DOIUrl":"10.1145/3373625.3417000","url":null,"abstract":"<p><p>Technology design for dementia primarily focuses on cognitive needs. This includes providing task support, accommodating memory changes, and simplifying interfaces by reducing complexity. However, research has demonstrated that dementia affects not only the cognitive abilities of people with dementia, but also their sensory and motor abilities. This work provides a first step towards understanding the interaction between sensory changes and technology use by people with dementia through interviews with people with mild to moderate dementia and practitioners. Our analysis yields an understanding of strategies to use technology to overcome sensory changes associated with dementia as well as barriers to using certain technologies. We present new directions for the design of technologies for people with mild to moderate dementia, including intentional sensory stimulation to facilitate comprehension, as well as opportunities to leverage advances in technology design from other disabilities for dementia.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8299872/pdf/nihms-1710953.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39221676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}