AI-generated tactile graphics for visually impaired children: A usability study of a multimodal educational product
Hantian Wu, Hongyi Yang, Fangyuan Chang, Dian Zhu, Zhao Liu
{"title":"人工智能生成的视障儿童触觉图形:多模态教育产品的可用性研究","authors":"Hantian Wu, Hongyi Yang, Fangyuan Chang, Dian Zhu, Zhao Liu","doi":"10.1016/j.ijhcs.2025.103525","DOIUrl":null,"url":null,"abstract":"<div><div>Approximately 70 million children aged 0–14 worldwide have visual impairments, limiting their language acquisition and image recognition development due to a lack of visual input. Human-computer interaction technologies provide an opportunity to learn Braille and images through touch and auditory stimuli, replacing traditional visual input elements. However, effectively integrating these sensory inputs remains a challenge. To address this, we developed \"TaleVision,\" a product that uses large language models to instantly generate touchable images and sound effects, allowing visually impaired children to better understand the connection between Braille and images. We recruited five sighted children and 14 children with congenital visual impairments to test our product. Almost all participants reported high satisfaction and rated the user experience highly. Additionally, we also observed how visually impaired children and teachers interacted with our products to improve teaching and learning. Based on the results, we discussed best design practices, process recommendations, and potential future solutions.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"200 ","pages":"Article 103525"},"PeriodicalIF":5.3000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI-generated tactile graphics for visually impaired children: A usability study of a multimodal educational product\",\"authors\":\"Hantian Wu, Hongyi Yang, Fangyuan Chang, Dian Zhu, Zhao Liu\",\"doi\":\"10.1016/j.ijhcs.2025.103525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Approximately 70 million children aged 0–14 worldwide have visual impairments, limiting their language acquisition and image recognition development due to a lack of visual input. Human-computer interaction technologies provide an opportunity to learn Braille and images through touch and auditory stimuli, replacing traditional visual input elements. However, effectively integrating these sensory inputs remains a challenge. To address this, we developed \\\"TaleVision,\\\" a product that uses large language models to instantly generate touchable images and sound effects, allowing visually impaired children to better understand the connection between Braille and images. We recruited five sighted children and 14 children with congenital visual impairments to test our product. Almost all participants reported high satisfaction and rated the user experience highly. Additionally, we also observed how visually impaired children and teachers interacted with our products to improve teaching and learning. 
Based on the results, we discussed best design practices, process recommendations, and potential future solutions.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"200 \",\"pages\":\"Article 103525\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-04-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1071581925000825\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581925000825","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
AI-generated tactile graphics for visually impaired children: A usability study of a multimodal educational product
Approximately 70 million children aged 0–14 worldwide have visual impairments, and the lack of visual input limits their language acquisition and the development of image recognition. Human-computer interaction technologies offer an opportunity to learn Braille and images through tactile and auditory stimuli in place of traditional visual input, but effectively integrating these sensory channels remains a challenge. To address this, we developed "TaleVision," a product that uses large language models to instantly generate touchable images and sound effects, helping visually impaired children better understand the connection between Braille and images. We recruited five sighted children and 14 children with congenital visual impairments to test the product. Almost all participants reported high satisfaction and rated the user experience highly. We also observed how visually impaired children and their teachers interacted with the product, with the aim of improving teaching and learning. Based on the results, we discuss best design practices, process recommendations, and potential future solutions.
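The abstract does not specify how TaleVision turns story text into tactile output, so the following is only a rough illustration of one way such a text-to-tactile pipeline could be structured. It is a minimal Python sketch under stated assumptions: the LLM call is stubbed with a canned response, and the function names, JSON schema, and SVG rendering are hypothetical, not the authors' implementation.

```python
"""Illustrative sketch of an LLM-to-tactile-graphic pipeline.

Hypothetical: TaleVision's implementation is not published in the
abstract, so every name and format below is an assumption.
"""
import json


def ask_llm(prompt: str) -> str:
    """Stand-in for a large language model call. Returns a canned
    response so the sketch runs offline without any API."""
    return json.dumps({
        "shapes": [{"type": "circle", "cx": 60, "cy": 60, "r": 40}],
        "braille_label": "sun",
        "sound_effect": "birdsong",
    })


def scene_to_tactile(sentence: str) -> tuple[str, str]:
    """Ask the model for a radically simplified scene, then render it
    as thick black outlines with no fills, so the shapes remain
    distinguishable once embossed."""
    spec = json.loads(ask_llm(
        "Reduce this scene to at most 3 bold outline shapes, a one-word "
        f"Braille label, and a sound-effect tag, as JSON: {sentence}"
    ))
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">']
    for shape in spec["shapes"]:
        if shape["type"] == "circle":
            parts.append(
                f'<circle cx="{shape["cx"]}" cy="{shape["cy"]}" '
                f'r="{shape["r"]}" fill="none" stroke="black" '
                'stroke-width="6"/>'
            )
    parts.append("</svg>")
    return "\n".join(parts), spec["sound_effect"]


if __name__ == "__main__":
    svg, sfx = scene_to_tactile("The sun rises over the meadow.")
    print(svg)
    print("pair with sound effect:", sfx)
```

In a real deployment the SVG outline would presumably be routed to a swell-paper printer or refreshable tactile display, and the sound-effect tag mapped to an audio asset; both steps are outside the scope of this sketch.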
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...