Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment

Tzi-Dong Jeremy Ng, Xiao Hu, Y. Que
DOI: 10.1145/3506860.3506881
Published in: LAK22: 12th International Learning Analytics and Knowledge Conference
Publication date: 2022-03-21
Citations: 5

Abstract

In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners’ self-report data. Learners’ attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners’ eye movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, with or without audio narration, 2) text diverted learners’ attention away from other visual elements that contextualized the heritage sites, 3) audio narration alone best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.