{"title":"打破障碍:通过语音助手为视觉障碍者提供虚拟博物馆导航的新方法","authors":"Yeliz Yücel, Kerem Rızvanoğlu","doi":"10.1016/j.ijhcs.2024.103403","DOIUrl":null,"url":null,"abstract":"<div><div>People with visual imparments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums.Museums, as important centers for exploration and learning, often overlook these accessibility issues.This abstract presents the iMuse Model, an innovative approach to create accessible and inclusive museum environments for them.The iMuse Model centers around the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey.This model consists of a two-layer study.The first layer involves collaboration with PWVI and their sight loss instructors to develop a five level framework tailored to their unique needs and challenges.The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique.Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development.Notably, we have developed two versions of the voice assistant: one with a humorous interaction and one with a non-humorous approach. 
Users expressed a preference for the humorous version, leading to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results.In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants.Our model not only aid PWVI in navigating unfamiliar spaces but also enhance their social learning, engagement, and appreciation of cultural heritage within museum environments.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103403"},"PeriodicalIF":5.3000,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants\",\"authors\":\"Yeliz Yücel, Kerem Rızvanoğlu\",\"doi\":\"10.1016/j.ijhcs.2024.103403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>People with visual imparments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums.Museums, as important centers for exploration and learning, often overlook these accessibility issues.This abstract presents the iMuse Model, an innovative approach to create accessible and inclusive museum environments for them.The iMuse Model centers around the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey.This model consists of a two-layer study.The first layer involves collaboration with PWVI and their sight loss instructors to develop a five level framework tailored to their unique needs and challenges.The second layer focuses on testing this 
design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique.Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development.Notably, we have developed two versions of the voice assistant: one with a humorous interaction and one with a non-humorous approach. Users expressed a preference for the humorous version, leading to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results.In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants.Our model not only aid PWVI in navigating unfamiliar spaces but also enhance their social learning, engagement, and appreciation of cultural heritage within museum environments.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"194 \",\"pages\":\"Article 103403\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1071581924001861\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer 
Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581924001861","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0
Abstract
Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants
People with visual impairments (PWVI) face challenges in accessing cultural, historical, and practical information in a predominantly visual world, which limits their participation in many activities, including museum visits. Museums, as important centers for exploration and learning, often overlook these accessibility issues. This paper presents the iMuse Model, an innovative approach to creating accessible and inclusive museum environments for PWVI. The iMuse Model centers on the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey. The model consists of a two-layer study. The first layer involves collaboration with PWVI and their sight-loss instructors to develop a five-level framework tailored to their unique needs and challenges. The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique. Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive-map development. Notably, we developed two versions of the voice assistant: one with a humorous interaction style and one with a non-humorous approach. Users preferred the humorous version, which led to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results. In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants. Our model not only aids PWVI in navigating unfamiliar spaces but also enhances their social learning, engagement, and appreciation of cultural heritage within museum environments.
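The abstract describes a voice assistant that serves each exhibit description in two variants, humorous and non-humorous. A minimal sketch of how such variant selection might look is below; the exhibit identifier, description texts, and function names are hypothetical illustrations, not the authors' actual implementation.

```python
# Hypothetical sketch: serving humorous vs. non-humorous audio descriptions
# for a museum exhibit. All identifiers and texts are illustrative only.

EXHIBIT_DESCRIPTIONS = {
    "medusa_column": {
        "neutral": (
            "You are facing the Medusa column. Its carved head rests "
            "sideways at the base, cool stone surrounded by dripping water."
        ),
        "humorous": (
            "You are facing the Medusa column. Don't worry, she has been "
            "lying down for centuries and won't turn anyone to stone today."
        ),
    },
}

def describe(exhibit_id: str, humorous: bool = False) -> str:
    """Return the audio-description text for an exhibit, choosing the
    humorous or neutral variant depending on the assistant version."""
    variants = EXHIBIT_DESCRIPTIONS[exhibit_id]
    return variants["humorous" if humorous else "neutral"]
```

In a Wizard of Oz study such as the one described, a hidden operator would trigger these responses manually rather than through automated dialogue management.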
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...