The Influence of the Viewpoint in a Self-Avatar on Body Part and Self-Localization

Albert H. van der Veer, Adrian J. T. Alsmith, M. Longo, Hong Yu Wong, D. Diers, Matthias Bues, Anna P. Giron, B. Mohler
DOI: 10.1145/3343036.3343124
Venue: ACM Symposium on Applied Perception 2019
Published: 2019-09-19
Citations: 6

Abstract

The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine if manipulating the viewpoint to either the height of the eyes or to the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%), and below the torso (12%). We only found an influence of viewpoint (eye- vs. chest-height) during the self-avatar adaptation phase for body part localization, and not for self-localization.

The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change in error distance for the viewpoint at chest-height was significantly larger, and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues, or memory, ground the self when in VR. However, the present results caution against the use of altered viewpoints in applications where a veridical position sense of body parts is required.
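The error-distance measure summarized above (a signed change along the body's vertical axis, reported in centimetres) can be sketched as a simple computation over pointed versus true body-part positions. This is an illustrative sketch only, not the authors' analysis code; the positions, body parts, and sign convention below are hypothetical assumptions.

```python
import numpy as np

# Hypothetical 3D positions (x, y, z) in metres, with z pointing up.
# Undershooting a low target (hips, knees, feet) means pointing too high,
# which yields a positive signed vertical error under this convention.

def vertical_error_cm(pointed: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Signed vertical error (pointed minus true, z-axis) in centimetres."""
    return (pointed[:, 2] - true[:, 2]) * 100.0

# Illustrative data: pointing responses for hips, knees, and feet.
true_pos = np.array([[0.0, 0.0, 0.95],   # hips
                     [0.0, 0.0, 0.50],   # knees
                     [0.0, 0.0, 0.05]])  # feet
pointed  = np.array([[0.0, 0.0, 1.10],
                     [0.0, 0.0, 0.75],
                     [0.0, 0.0, 0.40]])

errors = vertical_error_cm(pointed, true_pos)   # per-body-part signed error
mean_change = errors.mean()                     # analogous to the reported M values
```

Averaging such signed errors per condition would produce summary values comparable in form to the reported M = –2.8 cm (eye-height) and M = 21.1 cm (chest-height), where the sign encodes direction relative to the body part.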