KnowHow: Contextual Audio-Assistance for the Visually Impaired in Performing Everyday Tasks
A. Agarwal, Sujeath Pareddy, Swaminathan Manohar
Proceedings of the 2016 Symposium on Spatial User Interaction, 2016-10-15
DOI: 10.1145/2983310.2989196
Citations: 1
Abstract
We present a device for visually impaired persons (VIPs) that delivers contextual audio assistance for physical objects and tasks. In initial observations, we found ubiquitous use of audio-assistance technologies by VIPs for interacting with computing devices, such as Android TalkBack. However, we also saw that devices without screens frequently lack accessibility features. Our solution allows a VIP to obtain audio assistance in the presence of an arbitrary physical interface or object through a chest-mounted device. On-board camera sensors point towards the user's front-facing grasping region. Upon detecting certain gestures, such as picking up an object, the device provides helpful contextual audio information to the user. Textual interfaces can be read aloud by sliding a finger over the surface of the object, allowing the user to hear a document or receive audio guidance for electronic devices that lack assistive features. The user may speak questions to refine the audio assistance or to ask broad questions about their environment. Our motivation is to provide sensemaking faculties that creatively approximate those of non-VIPs in tasks that would otherwise exclude VIPs from common employment opportunities.
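The finger-sliding read-aloud interaction described above can be illustrated with a small sketch. This is not the authors' implementation, which the abstract does not detail; it is a minimal illustration assuming an upstream OCR pass has produced text lines with bounding boxes and a vision pass tracks the fingertip in the same camera frame. All names here (`TextLine`, `line_under_finger`, `lines_to_speak`) are hypothetical.

```python
# Illustrative sketch only; types and logic are assumptions, not the paper's pipeline.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TextLine:
    """One OCR-recognized line of text with its bounding box (pixels)."""
    text: str
    x: int   # left edge
    y: int   # top edge
    w: int   # width
    h: int   # height

def line_under_finger(lines: List[TextLine], fx: int, fy: int) -> Optional[str]:
    """Return the text of the line whose bounding box contains the
    fingertip (fx, fy), or None if the finger is over no text."""
    for line in lines:
        if line.x <= fx <= line.x + line.w and line.y <= fy <= line.y + line.h:
            return line.text
    return None

def lines_to_speak(lines: List[TextLine],
                   finger_path: List[Tuple[int, int]]) -> List[str]:
    """As the finger slides across the document, collect each newly
    entered line once, in order, for the text-to-speech engine."""
    spoken: List[str] = []
    for fx, fy in finger_path:
        text = line_under_finger(lines, fx, fy)
        if text is not None and (not spoken or spoken[-1] != text):
            spoken.append(text)
    return spoken
```

In a real system the fingertip coordinates would come from per-frame hand tracking and the output would be fed to a speech synthesizer; deduplicating consecutive hits keeps a slow finger from re-triggering the same line.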