IEEE transactions on visualization and computer graphics: Latest Publications

Portable Silent Room: Exploring VR Design for Anxiety and Emotion Regulation for Neurodivergent Women and Non-Binary Individuals.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616828
Kinga Skiers, Yun Suen Pai, Marina Nakagawa, Kouta Minamizawa, Giulia Barbareschi
Neurodivergent individuals, particularly those with Autism and Attention Deficit Hyperactivity Disorder (ADHD), frequently experience anxiety, panic attacks, meltdowns, and emotional dysregulation due to societal pressures and inadequate accommodations. These challenges are especially pronounced for neurodivergent women and non-binary individuals navigating intersecting barriers of neurological differences and gender expectations. This research investigates virtual reality (VR) as a portable safe space for emotional regulation, addressing challenges of sensory overload and motion sickness while enhancing relaxation capabilities. Our mixed-methods approach included an online survey (N = 223) and an ideation workshop (N = 32), which provided key design elements for creating effective calming VR environments. Based on these findings, we developed and iteratively tested VR prototypes with neurodivergent women and non-binary participants (N = 12), leading to a final version offering enhanced adaptability to individual sensory needs. This final prototype underwent a comprehensive evaluation with 25 neurodivergent participants to assess its effectiveness as a regulatory tool. This research contributes to the development of inclusive, adaptive VR environments that function as personalized "portable silent rooms" offering neurodivergent individuals on-demand access to sensory regulation regardless of physical location.
Citations: 0
p-Blend: Privacy- and Utility-Preserving Blendshape Perturbation Against Re-identification Attacks in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616736
Jingwei Liu, Lai Wei, Yan Hu, Guangrong Zhao, Qing Yang, Guangdong Bai, Yiran Shen
In this paper, we propose p-Blend, an efficient and effective blendshape perturbation mechanism designed to defend against both intra- and cross-app re-identification attacks in virtual reality. p-Blend provides privacy protection when streaming blendshape data to third-party applications on VR devices. In its design, we consider both privacy and utility. p-Blend not only perturbs blendshape values to resist re-identification attacks but also preserves the smoothness of facial animations and the naturalness of facial expressions, ensuring the continued usability of the data. We validate the effectiveness of p-Blend through extensive empirical evaluations and user studies. Quantitative experiments on a large-scale dataset collected from 45 participants demonstrate that p-Blend significantly reduces re-identification accuracy across a range of machine learning models. While pure-random perturbation fails to prevent attacks that exploit statistical features, p-Blend effectively mitigates these risks in both raw and statistical blendshape data. Additionally, user study results show that facial animations generated from p-Blend-perturbed blendshapes maintain greater smoothness and naturalness compared to those using purely random perturbation. The codes and dataset are available at https://github.com/jingwei1016/p-Blend.
Citations: 0
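
The abstract contrasts p-Blend with pure-random perturbation but does not spell out the mechanism itself (see the authors' repository for that). The sketch below only illustrates the underlying intuition: i.i.d. per-frame noise leaves statistical features largely intact while visibly jittering the animation, whereas temporally correlated noise can distort per-user statistics while keeping frame-to-frame motion smooth. All function names and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pure_random_perturb(frames, scale=0.05, rng=None):
    """Baseline: i.i.d. noise per frame. Jitters the animation, yet
    time-averaged statistics (means, ranges) survive mostly unchanged."""
    rng = rng or np.random.default_rng()
    return np.clip(frames + rng.normal(0.0, scale, frames.shape), 0.0, 1.0)

def smooth_perturb(frames, scale=0.05, alpha=0.9, rng=None):
    """Temporally correlated (AR(1)) noise: shifts per-user statistics while
    keeping frame-to-frame differences small, so facial motion stays smooth."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(0.0, scale, frames.shape)
    noise = np.zeros_like(frames)
    for t in range(1, len(frames)):
        noise[t] = alpha * noise[t - 1] + np.sqrt(1.0 - alpha**2) * eps[t]
    return np.clip(frames + noise, 0.0, 1.0)

# frames: (T, 52) ARKit-style blendshape weights in [0, 1]
frames = np.random.default_rng(0).uniform(0.0, 1.0, (300, 52))
protected = smooth_perturb(frames, scale=0.08)
```
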
Multimodal Contrastive Learning for Cybersickness Recognition Using Brain Connectivity Graph Representation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616797
Peike Wang, Ming Li, Ziteng Wang, Yong-Jin Liu, Lili Wang
Cybersickness significantly impairs user comfort and immersion in virtual reality (VR). Effective identification of cybersickness leveraging physiological, visual, and motion data is a critical prerequisite for its mitigation. However, current methods primarily employ direct feature fusion across modalities, which often leads to limited accuracy due to inadequate modeling of inter-modal relationships. In this paper, we propose a multimodal contrastive learning method for cybersickness recognition. First, we introduce Brain Connectivity Graph Representation (BCGR), an innovative graph-based representation that captures cybersickness-related connectivity patterns across modalities. We further develop three BCGR instances: E-BCGR, constructed based on EEG signals; MV-BCGR, constructed based on video and motion data; and S-BCGR, obtained through our proposed standardized decomposition algorithm. Then, we propose a connectivity-constrained contrastive fusion module, which aligns E-BCGR and MV-BCGR into a shared latent space via graph contrastive learning while utilizing S-BCGR as a connectivity constraint to enhance representation quality. Moreover, we construct a multimodal cybersickness dataset comprising synchronized EEG, video, and motion data collected in VR environments to promote further research in this domain. Experimental results demonstrate that our method outperforms existing state-of-the-art methods across four critical evaluation metrics: accuracy, sensitivity, specificity, and the area under the curve. Source code: https://github.com/PEKEW/cybersickness-bcgr.
Citations: 0
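
The abstract names graph contrastive learning as the mechanism that aligns E-BCGR and MV-BCGR in a shared latent space. A minimal sketch of such an alignment objective, assuming each trial has already been encoded into two embedding vectors, is the symmetric InfoNCE loss below; batch size, dimensionality, and temperature are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_eeg, z_mv, tau=0.1):
    """Symmetric InfoNCE: the E-BCGR and MV-BCGR embeddings of the same
    trial are pulled together; other trials in the batch act as negatives."""
    z_eeg = F.normalize(z_eeg, dim=1)
    z_mv = F.normalize(z_mv, dim=1)
    logits = z_eeg @ z_mv.t() / tau        # (B, B) cosine similarities
    labels = torch.arange(z_eeg.size(0))   # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# z_eeg, z_mv: (B, d) outputs of two graph encoders over the same batch of trials
loss = nt_xent(torch.randn(32, 128), torch.randn(32, 128))
```
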
EnVisionVR: A Scene Interpretation Tool for Visual Accessibility in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3617147
Junlong Chen, Rosella P Galindo Esparza, Vanja Garaj, Per Ola Kristensson, John Dudley
Effective visual accessibility in Virtual Reality (VR) is crucial for Blind and Low Vision (BLV) users. However, designing visual accessibility systems is challenging due to the complexity of 3D VR environments and the need for techniques that can be easily retrofitted into existing applications. While prior work has studied how to enhance or translate visual information, the advancement of Vision Language Models (VLMs) provides an exciting opportunity to advance the scene interpretation capability of current systems. This paper presents EnVisionVR, an accessibility tool for VR scene interpretation. Through a formative study of usability barriers, we confirmed the lack of visual accessibility features as a key barrier for BLV users of VR content and applications. In response, we used our findings from the formative study to inform the design and development of EnVisionVR, a novel visual accessibility system leveraging a VLM, voice input and multimodal feedback for scene interpretation and virtual object interaction in VR. An evaluation with 12 BLV users demonstrated that EnVisionVR significantly improved their ability to locate virtual objects, effectively supporting scene understanding and object interaction.
Citations: 0
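
The paper does not say which VLM or API backs EnVisionVR. Purely as a sketch of how a retrofittable scene-interpretation step could look, the code below sends a captured viewpoint to a generic vision-language endpoint (the OpenAI Python client is used as a stand-in) and returns text that could be routed to text-to-speech. Model name, prompt, and function names are all assumptions.

```python
import base64
from openai import OpenAI  # stand-in VLM client; the paper names no specific API

def describe_scene(png_path, question="What objects are in front of me?"):
    """Send a captured VR viewpoint to a vision-language model and return a
    short textual description suitable for speech output."""
    with open(png_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```
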
"I was truly able to express the image of myself that I have within": Exploring VR Group Therapy Approaches with the LGBTQIA+ community. “我真的能够表达我自己的形象”:探索VR团体治疗方法与LGBTQIA+社区。
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-06 DOI: 10.1109/TVCG.2025.3616754
Kinga Skiers, Danyang Peng, Anish Kundu, Tanner Person, Kenkichi Takase, Tamii Nagoshi, Sawako Nakayama, Yano Yuichiro, Tomoyuki Miyazaki, Kouta Minamizawa, Giulia Barbareschi
{"title":"\"I was truly able to express the image of myself that I have within\": Exploring VR Group Therapy Approaches with the LGBTQIA+ community.","authors":"Kinga Skiers, Danyang Peng, Anish Kundu, Tanner Person, Kenkichi Takase, Tamii Nagoshi, Sawako Nakayama, Yano Yuichiro, Tomoyuki Miyazaki, Kouta Minamizawa, Giulia Barbareschi","doi":"10.1109/TVCG.2025.3616754","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616754","url":null,"abstract":"<p><p>Members of the LGBTQIA+ community are more likely to face mental health challenges. However, stigma and the fear of being outed often prevent them from seeking professional support. To address this, we collaborated with mental health professionals and LGBTQIA+ communities in Japan to develop a multi-user Virtual Reality (VR) platform that facilitates access to group therapy sessions. The system allows users to participate using personalized avatars and customized voices, preserving anonymity while enabling them to present themselves as they wish. We conducted a user study with 21 LGBTQIA+ participants and two qualified counselors to evaluate their experiences with VR-based therapy. Findings revealed that the created avatars enabled participants to express their chosen gender identity and increase confidence, acting as protective intermediaries. However, participants also noted how anonymity could affect trust, and suggested that better representation of body language and the introduction of trust-building activities could help compensate for such ambivalence. Overall, the platform fostered a strong sense of co-presence, and both counselors and LGBTQIA+ members felt that, with some ergonomic adjustment to improve the comfort of the headset during longer sessions, VR platforms could offer substantial opportunities for safe and representative access to mental health services.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-illumination-interfered Neural Holography with Expanded Eyebox.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616793
Xinxing Xia, Pengfei Mi, Yiqing Tao, Xiangyu Meng, Wenbin Zhou, Yingjie Yu, Yifan Peng
Holography has immense potential for near-eye displays in virtual and augmented reality (VR/AR), providing natural 3D depth cues through wavefront reconstruction. However, balancing the field of view (FOV) with the eyebox remains challenging, constrained by the étendue limitation. Additionally, holographic image quality is often compromised due to differences between actual wave propagation and simulation models. This study addresses these challenges by expanding the eyebox via multi-angle illumination and enhancing image quality with end-to-end pupil-aware hologram optimization. Further, energy efficiency is improved by incorporating higher-order diffractions and pupil constraints. We explore a Pupil-HOGD algorithm for multi-angle illumination and validate it with a dual-angle holographic display prototype. Integrated with camera calibration and tracked eye position, the developed Pupil-HOGD algorithm improves image quality and expands the eyebox by 50% horizontally. We envision that this approach extends the space-bandwidth product (SBP) of holographic displays, enabling broader applications in immersive, high-quality visual computing.
Citations: 0
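
The Pupil-HOGD algorithm itself is not detailed in the abstract. For orientation only, the sketch below shows the standard single-illumination baseline that pupil-aware, multi-angle methods of this kind extend: optimizing a phase-only SLM pattern by gradient descent through a differentiable angular-spectrum propagation model. Wavelength, pixel pitch, propagation distance, and resolution are placeholder values.

```python
import torch

def asm_propagate(field, dist, wavelength, pitch):
    """Angular-spectrum free-space propagation of a complex field.
    Evanescent frequencies are clamped to zero phase for simplicity."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = torch.clamp(1.0 / wavelength**2 - FX**2 - FY**2, min=0.0)
    H = torch.exp(2j * torch.pi * dist * torch.sqrt(arg))
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Optimize a phase-only SLM pattern so the propagated intensity matches a target.
target = torch.rand(512, 512)                      # target image intensity
phase = torch.zeros(512, 512, requires_grad=True)  # SLM phase pattern
opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(200):
    slm_field = torch.exp(1j * phase)              # unit-amplitude modulation
    recon = asm_propagate(slm_field, 0.1, 520e-9, 8e-6).abs() ** 2
    loss = torch.nn.functional.mse_loss(recon / recon.mean(), target / target.mean())
    opt.zero_grad(); loss.backward(); opt.step()
```
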
See What I Mean? Mobile Eye-Perspective Rendering for Optical See-through Head-mounted Displays.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616739
Gerlinde Emsenhuber, Tobias Langlotz, Denis Kalkofen, Markus Tatzgern
Image-based scene understanding allows Augmented Reality (AR) systems to provide contextual visual guidance in unprepared, real-world environments. While effective on video see-through (VST) head-mounted displays (HMDs), such methods suffer on optical see-through (OST) HMDs due to misregistration between the world-facing camera and the user's eye perspective. To approximate the user's true eye view, we implement and evaluate three software-based eye-perspective rendering (EPR) techniques on a commercially available, untethered OST HMD (Microsoft HoloLens 2): (1) Plane-Proxy EPR, projecting onto a fixed-distance plane; (2) Mesh-Proxy EPR, using SLAM-based reconstruction for projection; and (3) Gaze-Proxy EPR, a novel eye-tracking-based method that aligns the projection with the user's gaze depth. A user study on real-world tasks underscores the importance of accurate EPR and demonstrates gaze-proxy as a lightweight alternative to geometry-based methods. We release our EPR framework as open source.
Citations: 0
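
Of the three variants, Plane-Proxy EPR has the simplest geometry: if every pixel is assumed to lie on a plane at a fixed depth d, the rewarp from the world-facing camera to the eye viewpoint collapses to a single plane-induced homography. The sketch below computes that textbook homography from assumed calibration inputs; it is not the authors' HoloLens 2 implementation.

```python
import numpy as np

def plane_proxy_homography(K_cam, K_eye, R, t, n=np.array([0.0, 0.0, 1.0]), d=2.0):
    """Homography mapping camera pixels to eye-view pixels, assuming all
    scene points lie on the plane n.X = d (metres, camera frame).
    R, t: rigid transform from the camera frame to the eye frame."""
    H = K_eye @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_cam)
    return H / H[2, 2]

# Example with hypothetical intrinsics and a 2 cm camera-to-eye offset; the
# result could be applied with e.g. cv2.warpPerspective(frame, H, (w, h)).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = plane_proxy_homography(K, K, np.eye(3), np.array([0.02, 0.0, 0.0]))
```
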
DenseSplat: Densifying Gaussian Splatting SLAM with Neural Radiance Prior.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3617961
Mingrui Li, Shuhong Liu, Tianchen Deng, Hongyu Wang
Gaussian SLAM systems excel in real-time rendering and fine-grained reconstruction compared to NeRF-based systems. However, their reliance on extensive keyframes is impractical for deployment in real-world robotic systems, which typically operate under sparse-view conditions that can result in substantial holes in the map. To address these challenges, we introduce DenseSplat, the first SLAM system that effectively combines the advantages of NeRF and 3DGS. DenseSplat utilizes sparse keyframes and NeRF priors for initializing primitives that densely populate maps and seamlessly fill gaps. It also implements geometry-aware primitive sampling and pruning strategies to manage granularity and enhance rendering efficiency. Moreover, DenseSplat integrates loop closure and bundle adjustment, significantly enhancing frame-to-frame tracking accuracy. Extensive experiments on multiple large-scale datasets demonstrate that DenseSplat achieves superior performance in tracking and mapping compared to current state-of-the-art methods.
Citations: 0
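
The abstract describes initializing Gaussian primitives from a NeRF prior so that sparse keyframe maps are densely populated. The sketch below is a hypothetical rendition of that seeding step, assuming the prior already yields per-ray depth and colour; the scale and opacity heuristics are invented for illustration and are not the paper's method.

```python
import numpy as np

def seed_primitives(rays_o, rays_d, prior_depth, prior_rgb, pixel_footprint=1e-3):
    """Place one new Gaussian per hole-covering ray at the depth predicted
    by a NeRF prior, coloured by the prior's radiance.

    rays_o, rays_d : (N, 3) ray origins and unit directions
    prior_depth    : (N,)   depth along each ray from the prior
    prior_rgb      : (N, 3) prior colour for each ray
    """
    centers = rays_o + prior_depth[:, None] * rays_d
    # Isotropic initial scale ~ depth-scaled pixel footprint (common heuristic).
    scales = np.tile(prior_depth[:, None] * pixel_footprint, (1, 3))
    opacities = np.full(len(centers), 0.1)  # start faint; prune if never used
    return centers, scales, opacities, prior_rgb
```
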
Influence of Object Height, Shadow and Adapting Luminance on Outdoor Depth Perception in Augmented Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-06 DOI: 10.1109/TVCG.2025.3616839
Shining Ma, Chaochao Liu, Jingyuan Wang, Yue Liu, Yongtian Wang, Weitao Song
Augmented reality (AR) technology has great potential in training, exhibition, and visual guidance applications, all of which demand precise virtual-real registration in perceived depth. Many AR applications such as navigation and tourism guidance are implemented in outdoor environments. However, prior research on depth perception in AR has predominantly focused on indoor environments, characterized by lower illumination levels and more confined space compared to outdoor settings. To address this gap, this paper presents a systematic investigation into depth perception in outdoor environments. Two experiments were conducted: the first explored how to eliminate the bias induced by a floating object and how knowledge of object height influences perceived depth; the second examined how ambient luminance affects depth estimation in AR. Our findings revealed an overestimation of perceived depth when participants were unaware of the actual height of the floating object, but an underestimation when they were informed of it before the experiment. Additionally, shadows effectively reduced depth errors regardless of whether participants were informed of the object's height. The second experiment further indicated that, in outdoor environments, reducing ambient luminance significantly improves the accuracy of depth perception in AR.
Citations: 0
Designing Hand and Forearm Gestures to Control Virtual Forearm for User-Initiated Forearm Deformation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date: 2025-10-03 DOI: 10.1109/TVCG.2025.3616825
Yilong Lin, Han Shi, Weitao Jiang, Xuesong Zhang, Hye-Young Jo, Yoonji Kim, Seungwoo Je
Thanks to the development of virtual reality (VR) technology, there is growing research on VR avatar body deformation effects. However, previous research mainly focused on passive body deformation expression, leaving users with limited methods to actively control their virtual bodies. To address this gap, we explored user-controlled forearm deformation by investigating how hand and forearm gestures can be mapped to various degrees of avatar forearm deformation. We conducted a gesture design workshop with six designers to generate gesture sets for different forearm deformations and deformation degrees, resulting in 15 gesture sets. We then selected the three highest-rated gesture sets and conducted a comparative study to evaluate the sense of embodiment and user performance across them. Our findings provide design suggestions for gesture-controlled forearm deformation in VR.
Citations: 0