A Dataset and Machine Learning Approach to Classify and Augment Interface Elements of Household Appliances to Support People with Visual Impairment

Hanna Tschakert, Florian Lang, Markus Wieland, Albrecht Schmidt, Tonja Machulla

Proceedings of the 28th International Conference on Intelligent User Interfaces
Published: 2023-03-27
DOI: 10.1145/3581641.3584038
Many modern household appliances are challenging to operate for people with visual impairment. Low-contrast designs and insufficient tactile feedback make it difficult to distinguish interface elements and to recognize their function. Augmented reality (AR) can be used to visually highlight such elements and provide assistance to people with residual vision. To realize this goal, we (1) created a dataset consisting of 13,702 images of interfaces from household appliances and manually labeled control elements; (2) trained a neural network to recognize control elements and to distinguish between PushButton, TouchButton, Knob, Slider, and Toggle; and (3) designed various contrast-rich and visually simple AR augmentations for these elements. The results were implemented as a screen-based assistive AR application, which we tested in a user study with six individuals with visual impairment. Participants were able to recognize control elements that were imperceptible without the assistive application. The approach was well received, especially for the potential of familiarizing oneself with novel devices. The automatic parsing and augmentation of interfaces provide an important step toward the independent interaction of people with visual impairments with their everyday environment.
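The paper itself does not publish code, but the pipeline the abstract outlines — detect control elements, classify them into the five classes, then render a high-contrast overlay per element — can be sketched. The sketch below is a hypothetical illustration: the `Detection` structure, the per-class overlay styles, and the confidence threshold are all assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# The five control-element classes labeled in the paper's dataset.
CONTROL_CLASSES = ["PushButton", "TouchButton", "Knob", "Slider", "Toggle"]

@dataclass
class Detection:
    label: str                  # predicted class name
    box: tuple                  # (x, y, w, h) in image pixels
    score: float                # detector confidence in [0, 1]

# Hypothetical mapping from element class to a contrast-rich overlay style;
# the actual AR augmentation designs in the paper may differ.
AUGMENTATION_STYLE = {
    "PushButton":  {"shape": "circle",    "color": "#FFFF00"},
    "TouchButton": {"shape": "rectangle", "color": "#00FFFF"},
    "Knob":        {"shape": "circle",    "color": "#FF00FF"},
    "Slider":      {"shape": "rectangle", "color": "#00FF00"},
    "Toggle":      {"shape": "rectangle", "color": "#FFA500"},
}

def plan_overlays(detections, min_score=0.5):
    """Keep confident detections of known classes and pair each with
    the overlay style a renderer would draw on top of the live view."""
    return [
        (det.box, AUGMENTATION_STYLE[det.label])
        for det in detections
        if det.label in AUGMENTATION_STYLE and det.score >= min_score
    ]

dets = [
    Detection("Knob", (40, 60, 30, 30), 0.91),
    Detection("Slider", (10, 10, 80, 12), 0.35),  # below threshold, dropped
]
print(len(plan_overlays(dets)))  # prints 1
```

In a real screen-based AR application, the detections would come from the trained network on each camera frame, and the planned overlays would be drawn over the appliance interface on the display.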