Latest Publications in IEEE Transactions on Visualization and Computer Graphics

Hit Around: Substitutional Moving Robot for Immersive and Exertion Interaction with Encountered-Type Haptic
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549556
Yu-Hsiang Weng; Ping-Hsuan Han; Kuan-Ning Chang; Chi-Yu Lin; Chia-Hui Lin; Ho Yin Ng; Chien-Hsing Chou; Wen-Hsin Chiu
Abstract: Previous works have shown the potential of immersive technologies to make physical activities more engaging. With encountered-type haptic feedback, users can perceive a more realistic sensation during exertion interaction in substitutional reality. Although substitutional reality has used physical environments, props, and devices to provide encountered-type haptic feedback, these cannot withstand strong human strikes and do not provide feedback while users move around, as in combat sports. In this work, we present Hit Around, a substitutional moving robot for immersive and exertion interaction, with which the user can move, punch a virtual opponent, and perceive encountered-type haptic feedback anywhere. We gathered insights into immersive exertion interaction from three exhibitions with iterative prototypes, then designed and implemented the hardware system and application. To assess mobility and weight-loading capacity, we conducted two technical evaluations and a laboratory experiment to validate feasibility. Finally, a field deployment study explored the limitations and challenges of developing immersive exertion interaction with encountered-type haptics.
Volume 31, Issue 5, pages 3569-3579
Citations: 0
Detection Thresholds for Replay and Real-Time Discrepancies in VR Hand Redirection
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549571
Kiyu Tanaka; Takuto Nakamura; Keigo Matsumoto; Hideaki Kuzuoka; Takuji Narumi
Abstract: Hand redirection, which subtly adjusts a user's hand movements in a virtual environment, can modify perception and movement by providing real-time corrections to motor feedback. In the context of motor learning and rehabilitation, observing replays of movements has been shown to enhance motor function. Applying hand redirection to these replays, by making movements appear larger or smaller than they actually are, has the potential to improve motor function further. However, the detection threshold for hand redirection in motion replays remains unclear, as redirection has primarily been studied in real-time feedback settings. This study aims to determine the threshold at which hand redirection during post-exercise replay sessions becomes detectable. We conducted two psychophysical experiments to evaluate how much discrepancy between replayed and actual movements can go unnoticed by users, both with hand redirection (N=20) and without (N=18). Our findings reveal a tendency for the amount of movement during replay to be underestimated. Furthermore, compared with conventional real-time hand redirection without replay, replay manipulations involving redirection applied during the preceding reaching task resulted in a significantly larger just-noticeable difference (JND). These insights are crucial for leveraging hand redirection techniques in replay-based motor learning applications.
Volume 31, Issue 5, pages 2767-2775
Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10919225
Citations: 0
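For readers unfamiliar with hand redirection, the sketch below illustrates the basic manipulation studied in this paper: the virtual hand's displacement is scaled by a gain relative to the real hand's displacement, so a (real-time or replayed) movement appears larger or smaller than it actually was. This is a minimal illustration under my own assumptions; the function name `redirect_hand` and the single scalar gain are hypothetical, and the authors' actual redirection and replay pipeline may differ.

```python
import numpy as np

def redirect_hand(real_pos, origin, gain):
    """Scale the hand's displacement from a reference origin by a gain.

    gain > 1 makes the virtual movement look larger than the real one,
    gain < 1 makes it look smaller, and gain = 1 reproduces the real motion.
    """
    real_pos = np.asarray(real_pos, dtype=float)
    origin = np.asarray(origin, dtype=float)
    return origin + gain * (real_pos - origin)

# Example: a 30 cm forward reach rendered with 1.2x amplification.
origin = np.array([0.0, 1.0, 0.0])        # reach start position (meters)
real_hand = np.array([0.0, 1.0, 0.3])     # tracked hand after the reach
virtual_hand = redirect_hand(real_hand, origin, gain=1.2)
print(virtual_hand)                        # [0. 1. 0.36] -> rendered as a 36 cm reach
```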
Enhancing Social Experiences in Immersive Virtual Reality with Artificial Facial Mimicry
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549163
Alessandro Visconti; Davide Calandra; Federica Giorgione; Fabrizio Lamberti
Abstract: The growing availability of affordable Virtual Reality (VR) hardware and the increasing interest in the Metaverse are driving the expansion of Social VR (SVR) platforms. These platforms allow users to embody avatars in immersive social virtual environments, enabling real-time interactions using consumer devices. Beyond merely replicating real-life social dynamics, SVR platforms offer opportunities to surpass real-world constraints by augmenting these interactions. One example of such augmentation is Artificial Facial Mimicry (AFM), which holds significant potential to enhance social experiences. Mimicry, the unconscious imitation of verbal and non-verbal behaviors, has been shown to positively affect human-agent interactions, yet its role in avatar-mediated human-to-human communication remains under-explored. AFM offers various possibilities, such as amplifying emotional expressions or substituting one emotion for another to better align with the context. Furthermore, AFM can address the limitations of current facial tracking technologies in fully capturing users' emotions. To investigate the potential benefits of AFM in SVR, an automated AFM system was developed. The system provides AFM along with other kinds of head mimicry (nodding and eye contact) and is compatible with consumer VR devices equipped with facial tracking. It was deployed within a test-bench immersive SVR application. A between-dyads user study was conducted to assess the potential benefits of AFM for interpersonal communication while maintaining avatar behavioral naturalness, comparing the experiences of pairs of participants communicating with AFM enabled against a baseline condition. Subjective measures revealed that AFM improved interpersonal closeness, aspects of social attraction, interpersonal trust, social presence, and naturalness compared to the baseline condition. These findings demonstrate AFM's positive impact on key aspects of social interaction and highlight its potential applications across various SVR domains.
Volume 31, Issue 5, pages 3325-3335
Citations: 0
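As a rough illustration of the kind of expression augmentation the abstract mentions (amplifying a tracked emotional expression before applying it to the avatar), here is a minimal sketch. The blendshape names, the amplification factor, and the function `amplify_expression` are my own assumptions for illustration, not the authors' implementation.

```python
def amplify_expression(blendshapes, targets, factor):
    """Scale selected facial blendshape weights and clamp them to [0, 1].

    blendshapes: dict mapping blendshape name -> tracked weight in [0, 1]
    targets: names of expression-related blendshapes to amplify
    factor: > 1 amplifies the expression, < 1 attenuates it
    """
    out = dict(blendshapes)
    for name in targets:
        if name in out:
            out[name] = min(1.0, max(0.0, out[name] * factor))
    return out

# Example: exaggerate a faint smile picked up by the headset's face tracker.
tracked = {"mouthSmileLeft": 0.2, "mouthSmileRight": 0.25, "browDownLeft": 0.1}
mirrored = amplify_expression(tracked, ["mouthSmileLeft", "mouthSmileRight"], factor=1.5)
```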
Investigating the Impact of Video Pass-Through Embodiment on Presence and Performance in Virtual Reality
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549891
Kristoffer Waldow; Constantin Kleinbeck; Arnulph Fuhrmann; Daniel Roth
Abstract: Creating a compelling sense of presence and embodiment can enhance the user experience in virtual reality (VR). One method to accomplish this is self-representation with embodied personalized avatars or video self-avatars. However, these approaches require external hardware and have primarily been evaluated for hand representations in VR across various tasks. We therefore present an alternative approach: video Pass-Through Embodiment (PTE), which utilizes the per-eye real-time depth map from Head-Mounted Displays (HMDs) traditionally used for Augmented Reality features. This method allows the user's real body to be cut out of the pass-through video stream and represented in the VR environment without additional hardware. To evaluate our approach, we conducted a between-subjects study involving 40 participants who completed a seated object-sorting task using either PTE or a customized avatar. The results show that PTE, despite its limited depth resolution leading to some visual artifacts, significantly enhances the user's sense of presence and embodiment. In addition, PTE does not negatively affect task performance or cognitive load, nor does it cause VR sickness. These findings imply that video pass-through embodiment offers a practical and efficient alternative to traditional avatar-based methods in VR.
Early access (volume and pages pending)
Citations: 0
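The core mechanism described above (cutting the user's body out of the pass-through stream using the headset's per-eye depth map) can be approximated by a simple depth-threshold mask, sketched below. The threshold value and the function name `composite_pass_through_body` are assumptions for illustration; the paper's actual segmentation and compositing are likely more involved.

```python
import numpy as np

def composite_pass_through_body(vr_frame, pass_through_frame, depth_map, max_body_depth=1.2):
    """Overlay near-depth pass-through pixels (assumed to be the user's body)
    onto the rendered VR frame.

    vr_frame, pass_through_frame: (H, W, 3) uint8 images for one eye
    depth_map: (H, W) float32 depth in meters, aligned with the pass-through frame
    max_body_depth: pixels closer than this are treated as the user's body
    """
    body_mask = depth_map < max_body_depth          # True where the body is likely visible
    out = vr_frame.copy()
    out[body_mask] = pass_through_frame[body_mask]  # keep the real body, virtual everything else
    return out
```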
SeamlessVR: Bridging the Immersive to Non-Immersive Visualization Divide
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549564
Shuqi Liao; Sparsh Chaudhri; Maanas K Karwa; Voicu Popescu
Abstract: This paper describes SeamlessVR, a method for switching effectively from immersive visualization in a virtual reality (VR) headset to non-immersive visualization on screen. SeamlessVR implements a continuous morph of the 3D visualization into a 2D visualization that matches what the user will see on screen after removing the headset. This visualization continuity reduces the cognitive effort of connecting the immersive to the non-immersive visualization, helping the user continue on screen a visualization task started in the headset. We compared SeamlessVR to the conventional approach of directly removing the headset in an IRB-approved user study with N=30 participants. SeamlessVR had a significant advantage over the conventional approach in terms of time and accuracy for target tracking in complex abstract and realistic scenes, in terms of participants' perception of the switch from immersive to non-immersive visualization, and in terms of usability. SeamlessVR did not pose cybersickness concerns.
Early access (volume and pages pending)
Citations: 0
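A simple way to picture the continuous 3D-to-2D morph described above is to interpolate each point of the visualization between its 3D world position and the position it will occupy on the screen plane once the headset is removed. The sketch below uses a plain linear blend; the blend schedule, the screen-plane mapping, and the function name `morph_to_screen` are assumptions, not the authors' implementation.

```python
import numpy as np

def morph_to_screen(points_3d, screen_points_3d, t):
    """Blend 3D visualization points toward their final on-screen layout.

    points_3d: (N, 3) positions in the immersive visualization
    screen_points_3d: (N, 3) the same points placed on a virtual screen plane,
                      matching what the user will see on the monitor
    t: morph parameter in [0, 1]; 0 = fully 3D, 1 = flattened to the screen layout
    """
    points_3d = np.asarray(points_3d, dtype=float)
    screen_points_3d = np.asarray(screen_points_3d, dtype=float)
    return (1.0 - t) * points_3d + t * screen_points_3d
```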
Focus-Driven Augmented Feedback: Enhancing Focus and Maintaining Engagement in Upper Limb Virtual Reality Rehabilitation
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549543
Kai-Lun Liao; Mengjie Huang; Jiajia Shi; Min Chen; Rui Yang
Abstract: Integrating biofeedback technology, such as real-time eye tracking, has revolutionized the landscape of virtual reality (VR) rehabilitation games, offering new opportunities for personalized therapy. Motivated by the goal of increasing patient focus during rehabilitation, the Focus-Driven Augmented Feedback (FDAF) system was developed to enhance focus and maintain engagement during upper limb VR rehabilitation. This approach dynamically adjusts augmented visual feedback based on a patient's gaze, creating a personalized rehabilitation experience tailored to individual needs. This research aims to develop and comprehensively evaluate the FDAF system to enhance patient focus and maintain engagement in VR rehabilitation environments. The methodology involved three experimental studies, which tested varying levels of augmented feedback with 71 healthy participants and 17 patients requiring upper limb rehabilitation. The results showed that a 30% augmentation level was optimal for healthy participants, while 20% was most effective for patients, ensuring sustained engagement without inducing discomfort. These findings highlight the potential of eye-tracking technology to dynamically customize feedback in VR rehabilitation, leading to more effective therapy and improved patient outcomes. This research contributes significant advancements in developing personalized VR rehabilitation techniques, offering valuable insights for future therapeutic applications.
Volume 31, Issue 5, pages 2653-2663
Citations: 0
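The abstract does not detail how the gaze signal drives the augmented feedback, so the sketch below is only one plausible reading: visual feedback on the therapy target is ramped up toward a configured augmentation level (e.g., the 20% or 30% levels reported above) when the patient's gaze drifts away from it, and relaxed when focus returns. The function and parameter names are hypothetical.

```python
def feedback_gain(gaze_on_target, current_gain, max_augmentation=0.3, step=0.02):
    """Ramp an augmentation gain in [0, max_augmentation] based on gaze.

    gaze_on_target: True if the eye tracker reports fixation on the therapy target
    current_gain: gain from the previous frame
    max_augmentation: upper bound, e.g. 0.3 for the 30% level or 0.2 for 20%
    step: per-frame change, keeping transitions gradual
    """
    if gaze_on_target:
        return max(0.0, current_gain - step)              # focus regained: fade feedback out
    return min(max_augmentation, current_gain + step)     # focus lost: strengthen feedback
```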
X's Day: Personality-Driven Virtual Human Behavior Generation
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549574
Haoyang Li; Zan Wang; Wei Liang; Yizhuo Wang
Abstract: Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research establishes a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Project website: https://behavior.agent-x.cn/.
Volume 31, Issue 5, pages 3514-3524
Citations: 0
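To make the Needs/Task/Activity hierarchy and the planner loop more concrete, here is a heavily simplified sketch of how personality traits and a world state might be turned into an LLM prompt that samples the next activity. The data classes, prompt wording, and the placeholder `query_llm` callable are all assumptions for illustration; see the project website for the authors' actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    time_of_day: str
    location: str
    recent_activities: list = field(default_factory=list)

@dataclass
class Agent:
    name: str
    personality: dict          # e.g. Big Five traits mapped to values in [0, 1]
    needs: dict                # e.g. {"rest": 0.2, "social": 0.8}

def plan_next_activity(agent, world, query_llm):
    """One autoregressive planning step: needs -> task -> concrete activity."""
    dominant_need = max(agent.needs, key=agent.needs.get)
    prompt = (
        f"Agent {agent.name} has personality {agent.personality} and currently "
        f"feels a strong need for '{dominant_need}'. It is {world.time_of_day} and "
        f"the agent is in the {world.location}. Recent activities: {world.recent_activities}. "
        "Propose the next plausible daily activity as a short phrase."
    )
    activity = query_llm(prompt)           # any LLM backend that returns a string
    world.recent_activities.append(activity)
    return activity
```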
MPGS: Multi-plane Gaussian Splatting for Compact Scenes Rendering
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549551
Deqi Li; Shi-Sheng Huang; Hua Huang
Abstract: Accurately reconstructing heterogeneous scenes for efficient, high-fidelity rendering remains a crucial but challenging task in many Virtual Reality and Augmented Reality applications. The recent 3D Gaussian Splatting (3DGS) has shown impressive scene rendering quality at real-time performance. However, for heterogeneous scenes with many weakly textured regions, the original 3DGS can easily produce numerous spurious floaters and an unbalanced reconstruction with redundant 3D Gaussians, which often leads to unsatisfactory rendering. This paper proposes a novel multi-plane Gaussian Splatting (MPGS) method, which aims to achieve high-fidelity rendering with compact reconstruction for heterogeneous scenes. The key insight of MPGS is a novel multi-plane Gaussian optimization strategy that effectively adjusts the Gaussian distribution for both richly textured and weakly textured regions. Moreover, we propose a multi-scale geometric correction mechanism to mitigate degradation of the 3D Gaussian distribution for compact scene reconstruction. In addition, we regularize the Gaussian distributions using normal information extracted during compact scene learning. Experimental results on public datasets demonstrate that MPGS achieves much better rendering quality than previous methods while using less storage and rendering more efficiently. To the best of our knowledge, MPGS is a new state-of-the-art 3D Gaussian splatting method for compact reconstruction of heterogeneous scenes, enabling high-fidelity novel view synthesis and, in particular, improving rendering quality in weakly textured regions. The code will be released at https://github.com/wanglids/MPGS.
Early access (volume and pages pending)
Citations: 0
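The abstract mentions regularizing the Gaussian distributions with normals extracted during compact scene learning. One generic way such a term is often written for Gaussian splatting (not necessarily the MPGS formulation) is to align each Gaussian's shortest principal axis with a reference normal; a minimal PyTorch sketch of that kind of regularizer follows, with all names hypothetical.

```python
import torch

def normal_alignment_loss(gaussian_axes, ref_normals):
    """Penalize misalignment between each Gaussian's shortest principal axis
    and a reference surface normal.

    gaussian_axes: (N, 3) unit vectors, e.g. the rotation column of the minimal scale
    ref_normals:   (N, 3) unit normals extracted from the learned scene
    """
    cos = torch.sum(gaussian_axes * ref_normals, dim=-1).abs()  # sign-agnostic alignment
    return (1.0 - cos).mean()
```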
Effects of Proprioceptive Attenuation with Noisy Tendon Electrical Stimulation on Adaptation to Beyond-Real Interaction
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549562
Maki Ogawa; Keigo Matsumoto; Kazuma Aoyama; Takuji Narumi
Abstract: Virtual reality (VR) enables beyond-real interactions (BRI) that transcend physical constraints, offering effective user experiences such as extending a hand to grasp distant objects. However, adapting to the novel mappings of BRI often reduces performance and the sense of embodiment. To address this, we propose using noisy tendon electrical stimulation (n-TES) to decrease proprioceptive precision. Previous studies have suggested that attenuating proprioceptive precision is crucial for sensory-motor adaptation. We therefore hypothesize that n-TES, which has been shown to reduce proprioceptive precision and induce vision-dependent perception in VR, can enhance user adaptation to BRI. We conducted a user study using go-go interaction, a BRI technique for interacting with distant objects, to assess the effects of n-TES. Given the individual variability in response to n-TES, participants first underwent a proprioceptive precision test to determine the stimulation intensity that best lowered their proprioceptive precision from among five levels ($\sigma$ = 0.25-125 mA). Reaching tasks in a 2x2 within-participants design evaluated the effects of go-go interaction and n-TES on performance, subjective task load, and embodiment. Results from 24 participants showed that go-go interaction increased reaching time and task load while decreasing the sense of embodiment. Contrary to our hypothesis, n-TES did not significantly mitigate most of these negative effects of go-go interaction, except that perceived agency was higher with n-TES during go-go interaction. The limited effectiveness of n-TES may be due to participants' habituation or sensory adaptation during the tasks. Future research should consider the adaptation process to BRI and investigate different BRI scenarios.
Volume 31, Issue 5, pages 2600-2610
Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918877
Citations: 0
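For context on the go-go interaction technique used in this study: it is the classic non-linear arm-extension mapping of Poupyrev et al. (1996), in which the virtual hand tracks the real hand one-to-one up to a threshold distance from the body and then extends quadratically beyond it. A minimal sketch of that standard mapping is below; the threshold and coefficient values are illustrative, not the ones used in the paper.

```python
import numpy as np

def go_go_mapping(hand_pos, body_pos, threshold=0.4, k=6.0):
    """Classic go-go non-linear reach: linear within `threshold` meters of the
    body, quadratic amplification beyond it (Poupyrev et al., 1996).
    """
    hand_pos = np.asarray(hand_pos, dtype=float)
    body_pos = np.asarray(body_pos, dtype=float)
    offset = hand_pos - body_pos
    r = np.linalg.norm(offset)
    if r < 1e-9 or r <= threshold:
        return hand_pos                                   # within reach: one-to-one mapping
    r_virtual = r + k * (r - threshold) ** 2              # beyond: amplified reach
    return body_pos + offset * (r_virtual / r)

# Example: a 0.6 m real reach is rendered as 0.6 + 6 * 0.2**2 = 0.84 m virtually.
virtual_hand = go_go_mapping([0.6, 1.2, 0.0], [0.0, 1.2, 0.0])
```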
Don't They Really Hear Us? A Design Space for Private Conversations in Social Virtual Reality
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549844
Josephus Jasper Limbago; Robin Welsch; Florian Muller; Mario Di Francesco
Abstract: Seamless transition between public dialogue and private talk is essential in everyday conversation. Social Virtual Reality (VR) has revolutionized interpersonal communication by creating a sense of closeness over distance through virtual avatars. However, existing social VR platforms do not succeed in providing safety and supporting private conversations, thereby hindering self-disclosure and limiting the potential for meaningful experiences. We approach this problem by exploring the factors affecting private conversations in social VR applications, including the usability of different interaction methods and awareness with respect to the virtual world. We conduct both expert interviews and a controlled experiment with a social VR prototype we developed. We then leverage the outcomes of the two studies to establish a design space that considers diverse dimensions (including privacy levels, social awareness, and modalities), laying the groundwork for more intuitive and meaningful private conversation experiences in social VR.
Early access (volume and pages pending)
Citations: 0