An Integrated Framework for Understanding Multimodal Embodied Experiences in Interactive Virtual Reality

Florent Robert, Hui-Yin Wu, L. Sassatelli, Stephen Ramanoël, A. Gros, M. Winckler
{"title":"理解交互虚拟现实中多模态具体化体验的集成框架","authors":"Florent Robert, Hui-Yin Wu, L. Sassatelli, Stephen Ramanoël, A. Gros, M. Winckler","doi":"10.1145/3573381.3596150","DOIUrl":null,"url":null,"abstract":"Virtual Reality (VR) technology enables “embodied interactions” in realistic environments where users can freely move and interact, with deep physical and emotional states. However, a comprehensive understanding of the embodied user experience is currently limited by the extent to which one can make relevant observations, and the accuracy at which observations can be interpreted. Paul Dourish proposed a way forward through the characterisation of embodied interactions in three senses: ontology, intersubjectivity, and intentionality. In a joint effort between computer and neuro-scientists, we built a framework to design studies that investigate multimodal embodied experiences in VR, and apply it to study the impact of simulated low-vision on user navigation. Our methodology involves the design of 3D scenarios annotated with an ontology, modelling intersubjective tasks, and correlating multimodal metrics such as gaze and physiology to derive intentions. We show how this framework enables a more fine-grained understanding of embodied interactions in behavioural research.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An Integrated Framework for Understanding Multimodal Embodied Experiences in Interactive Virtual Reality\",\"authors\":\"Florent Robert, Hui-Yin Wu, L. Sassatelli, Stephen Ramanoël, A. Gros, M. 
Winckler\",\"doi\":\"10.1145/3573381.3596150\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Virtual Reality (VR) technology enables “embodied interactions” in realistic environments where users can freely move and interact, with deep physical and emotional states. However, a comprehensive understanding of the embodied user experience is currently limited by the extent to which one can make relevant observations, and the accuracy at which observations can be interpreted. Paul Dourish proposed a way forward through the characterisation of embodied interactions in three senses: ontology, intersubjectivity, and intentionality. In a joint effort between computer and neuro-scientists, we built a framework to design studies that investigate multimodal embodied experiences in VR, and apply it to study the impact of simulated low-vision on user navigation. Our methodology involves the design of 3D scenarios annotated with an ontology, modelling intersubjective tasks, and correlating multimodal metrics such as gaze and physiology to derive intentions. 
We show how this framework enables a more fine-grained understanding of embodied interactions in behavioural research.\",\"PeriodicalId\":120872,\"journal\":{\"name\":\"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences\",\"volume\":\"49 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3573381.3596150\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3573381.3596150","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Virtual Reality (VR) technology enables “embodied interactions” in realistic environments where users can freely move and interact, with deep physical and emotional states. However, a comprehensive understanding of the embodied user experience is currently limited by the extent to which one can make relevant observations, and the accuracy at which observations can be interpreted. Paul Dourish proposed a way forward through the characterisation of embodied interactions in three senses: ontology, intersubjectivity, and intentionality. In a joint effort between computer and neuro-scientists, we built a framework to design studies that investigate multimodal embodied experiences in VR, and apply it to study the impact of simulated low-vision on user navigation. Our methodology involves the design of 3D scenarios annotated with an ontology, modelling intersubjective tasks, and correlating multimodal metrics such as gaze and physiology to derive intentions. We show how this framework enables a more fine-grained understanding of embodied interactions in behavioural research.