"IncluSet: A Data Surfacing Repository for Accessibility Datasets"
Hernisa Kacorri, Utkarsh Dwivedi, Sravya Amancherla, Mayanka K Jha, Riya Chanduka
ASSETS: Annual ACM Conference on Assistive Technologies, 2020. DOI: 10.1145/3373625.3418026
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8375514/pdf/nihms-1716335.pdf

Abstract: Datasets and data sharing play an important role in innovation, benchmarking, mitigating bias, and understanding the complexity of real-world AI-infused applications. However, there is a scarcity of available data generated by people with disabilities that could be used for training or evaluating machine learning models. This is partially due to smaller populations, disparate characteristics, a lack of expertise for data annotation, and privacy concerns. Even when data are collected and made publicly available, they are often difficult to locate. We present a novel data surfacing repository, called IncluSet, that allows researchers and the disability community to discover and link accessibility datasets. The repository is pre-populated with information about 139 existing datasets: 65 made publicly available, 25 available upon request, and 49 not shared by the authors but described in their manuscripts. More importantly, IncluSet is designed to expose existing and new dataset contributions so they may be discoverable through Google Dataset Search.
{"title":"Revisiting Blind Photography in the Context of Teachable Object Recognizers.","authors":"Kyungjun Lee, Jonggi Hong, Simone Pimento, Ebrima Jarjue, Hernisa Kacorri","doi":"10.1145/3308561.3353799","DOIUrl":"10.1145/3308561.3353799","url":null,"abstract":"<p><p>For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (<i>N</i> = 9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered) and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they know it can be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2019 ","pages":"83-95"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7415326/pdf/nihms-1609036.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38252920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Modules to Teach Accessibility in a User-Centered Design Course.","authors":"Amanda Lazar, Jonathan Lazar, Alisha Pradhan","doi":"10.1145/3308561.3354632","DOIUrl":"10.1145/3308561.3354632","url":null,"abstract":"<p><p>Courses in user-centered design, where students learn about centering design on the needs of individuals, is one natural point in which accessibility content can be injected into the curriculum. We describe the approach we have taken with sections in the undergraduate User-Centered Design Course at the University of Maryland, College Park. We initially introduced disability and accessibility in four modules: 1) websites and design portfolios, 2) introduction to understanding user needs, 3) prototyping, and 4) UX evaluation. We present a description of this content that was taught as an extended version in one Fall 2018 section and as an abbreviated version in all sections in Spring 2019. Survey results indicate that students' understanding of accessibility and assistive technology increased with the introduction of these modules.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2019 ","pages":"554-556"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7377301/pdf/nihms-1609037.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38186490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Understanding Mental Ill-health as Psychosocial Disability: Implications for Assistive Technology"
Kathryn E Ringland, Jennifer Nicholas, Rachel Kornfield, Emily G Lattie, David C Mohr, Madhu Reddy
ASSETS: Annual ACM Conference on Assistive Technologies, 2019, pp. 156-170. DOI: 10.1145/3308561.3353785
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7918274/pdf/nihms-1673380.pdf

Abstract: Psychosocial disability involves actual or perceived impairment due to a diversity of mental, emotional, or cognitive experiences. While assistive technology for psychosocial disabilities has been understudied in communities such as ASSETS, advances in computing have opened up a number of new avenues for assisting those with psychosocial disabilities beyond the clinic. However, these tools continue to emerge primarily within the framework of "treatment," emphasizing resolution or improvement of mental health symptoms. This work considers what it means to adopt a social model lens from disability studies and incorporate the expertise of assistive technology researchers in relation to mental health. Our investigation draws on interviews conducted with 18 individuals who have complex health needs that include mental health symptoms. This work highlights the potential role for assistive technology in supporting psychosocial disability outside of a clinical or medical framework.
{"title":"Evaluating Author and User Experience for an Audio-Haptic System for Annotation of Physical Models.","authors":"James M Coughlan, Joshua Miele","doi":"10.1145/3132525.3134811","DOIUrl":"10.1145/3132525.3134811","url":null,"abstract":"<p><p>We describe three usability studies involving a prototype system for creation and haptic exploration of labeled locations on 3D objects. The system uses a computer, webcam, and fiducial markers to associate a physical 3D object in the camera's view with a predefined digital map of labeled locations (\"hotspots\"), and to do real-time finger tracking, allowing a blind or visually impaired user to explore the object and hear individual labels spoken as each hotspot is touched. This paper describes: (a) a formative study with blind users exploring pre-annotated objects to assess system usability and accuracy; (b) a focus group of blind participants who used the system and, through structured and unstructured discussion, provided feedback on its practicality, possible applications, and real-world potential; and (c) a formative study in which a sighted adult used the system to add labels to on-screen images of objects, demonstrating the practicality of remote annotation of 3D models. These studies and related literature suggest potential for future iterations of the system to benefit blind and visually impaired users in educational, professional, and recreational contexts.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2017 ","pages":"369-370"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5714613/pdf/nihms919789.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35322706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"JustPoint: Identifying Colors with a Natural User Interface"
Sergio Mascetti, Silvia D'Acquisto, Andrea Gerino, Mattia Ducci, Cristian Bernareggi, James M Coughlan
ASSETS: Annual ACM Conference on Assistive Technologies, 2017, pp. 329-330. DOI: 10.1145/3132525.3134802
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5714614/pdf/nihms919790.pdf

Abstract: People with severe visual impairments usually have no way of identifying the colors of objects in their environment. While existing smartphone apps can recognize colors and speak them aloud, they require the user to center the object of interest in the camera's field of view, which is challenging for many users. We developed a smartphone app that addresses this problem by reading aloud the color of the object pointed to by the user's fingertip, without confusion from background colors. We evaluated the app with nine people who are blind, demonstrating the app's effectiveness and suggesting directions for future improvements.
"Speed-Dial: A Surrogate Mouse for Non-Visual Web Browsing"
Syed Masum Billah, Vikas Ashok, Donald E Porter, I V Ramakrishnan
ASSETS: Annual ACM Conference on Assistive Technologies, 2017, pp. 110-119. DOI: 10.1145/3132525.3132531

Abstract: Sighted people can browse the Web almost exclusively using a mouse. This is because web browsing mostly entails pointing and clicking on some element in the web page, and these two operations can be done almost instantaneously with a computer mouse. Unfortunately, people with vision impairments cannot use a mouse, as it only provides visual feedback through a cursor. Instead, they are forced to go through a slow and tedious process of building a mental map of the web page, relying primarily on a screen reader's keyboard shortcuts and its serial audio readout of the textual content of the page, including metadata. This can often cause content and cognitive overload. This paper describes our Speed-Dial system, which uses an off-the-shelf physical dial as a surrogate for the mouse for non-visual web browsing. Speed-Dial interfaces the physical dial with the semantic model of a web page and provides intuitive and rapid access to the entities and their content in the model, thereby bringing blind people's browsing experience closer to how sighted people perceive and interact with the Web. A user study with blind participants suggests that with Speed-Dial they can quickly move around the web page to select content of interest, akin to pointing and clicking with a mouse.
"A Platform Agnostic Remote Desktop System for Screen Reading"
Syed Masum Billah, Vikas Ashok, Donald E Porter, I V Ramakrishnan
ASSETS: Annual ACM Conference on Assistive Technologies, 2016, pp. 283-284. DOI: 10.1145/2982142.2982151

Abstract: Remote desktop technology, the enabler of access to applications hosted on remote hosts, relies primarily on scraping the pixels on the remote screen and redrawing them as a simple bitmap on the client's local screen. Such a technology simply will not work with screen readers, since the latter are innately tied to reading text. Because screen readers are locked in to a specific OS platform, extant solutions that enable remote access with screen readers, such as NVDARemote and JAWS Tandem, require homogeneity of OS platforms at both the client and remote sites. This demo presents Sinter, a system that eliminates this requirement. With Sinter, a blind Mac user, for example, can now access a remote Windows application with VoiceOver, a scenario heretofore not possible.
"Towards a Sign-Based Indoor Navigation System for People with Visual Impairments"
Alejandro Rituerto, Giovanni Fusco, James M Coughlan
ASSETS: Annual ACM Conference on Assistive Technologies, 2016, pp. 287-288. DOI: 10.1145/2982142.2982202
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5714555/pdf/nihms919788.pdf

Abstract: Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report the development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision, and floor plan information to estimate the user's location with no additional physical infrastructure, requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants, demonstrating the feasibility of the approach and highlighting the areas needing improvement.
"Using Computer Vision to Access Appliance Displays"
Giovanni Fusco, Ender Tekin, Richard E Ladner, James M Coughlan
ASSETS: Annual ACM Conference on Assistive Technologies, 2014, pp. 281-282. DOI: 10.1145/2661334.2661404

Abstract: People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances needed to perform a variety of daily activities, because these appliances are equipped with inaccessible electronic displays. To address this problem, we are developing a "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display. The current prototype analyzes video from the smartphone's camera, providing real-time feedback to guide the user until a satisfactory image is acquired, based on automatic estimates of image blur and glare. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source Software (FOSS) project.