IEEE Transactions on Visualization and Computer Graphics: Latest Publications

ESIQA: Perceptual Quality Assessment of Vision-Pro-based Egocentric Spatial Images.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-11. DOI: 10.1109/TVCG.2025.3549174
Xilei Zhu, Liu Yang, Huiyu Duan, Xiongkuo Min, Guangtao Zhai, Patrick Le Callet
{"title":"ESIQA: Perceptual Quality Assessment of Vision-Pro-based Egocentric Spatial Images.","authors":"Xilei Zhu, Liu Yang, Huiyu Duan, Xiongkuo Min, Guangtao Zhai, Patrick Le Callet","doi":"10.1109/TVCG.2025.3549174","DOIUrl":"10.1109/TVCG.2025.3549174","url":null,"abstract":"<p><p>With the development of eXtended Reality (XR), photo capturing and display technology based on head-mounted displays (HMDs) have experienced significant advancements and gained considerable attention. Egocentric spatial images and videos are emerging as a compelling form of stereoscopic XR content. The assessment for the Quality of Experience (QoE) of XR content is important to ensure a high-quality viewing experience. Different from traditional 2D images, egocentric spatial images present challenges for perceptual quality assessment due to their special shooting, processing methods, and stereoscopic characteristics However, the corresponding image quality assessment (IQA) research for egocentric spatial images is still lacking. In this paper, we establish the Egocentric Spatial Images Quality Assessment Database (ESIQAD), the first IQA database dedicated for egocentric spatial images as far as we know. Our ESIQAD includes 500 egocentric spatial images and the corresponding mean opinion scores (MOSs) under three display modes, including 2D display, 3D-window display, and 3D-immersive display. Based on our ESIQAD, we propose a novel mamba2-based multi-stage feature fusion model, termed ESIQAnet, which predicts the perceptual quality of egocentric spatial images under the three display modes. Specifically, we first extract features from multiple visual state space duality (VSSD) blocks, then apply cross attention to fuse binocular view information and use transposed attention to further refine the features. The multi-stage features are finally concatenated and fed into a quality regression network to predict the quality score. Extensive experimental results demonstrate that the ESIQAnet outperforms 22 state-of-the-art IQA models on the ESIQAD under all three display modes. The database and code are available at https://github.com/IntMeGroup/ESIQA.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
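The abstract above outlines a concrete pipeline: VSSD features, cross attention fusing the two eyes' views, transposed attention for refinement, then concatenation into a quality regressor. The PyTorch sketch below illustrates only that fusion-and-regression stage under assumed shapes; `FusionQualityHead`, the stage count, and all dimensions are invented for illustration, and the VSSD feature extractor is stubbed out as precomputed tensors. It is not the authors' ESIQAnet code.

```python
import torch
import torch.nn as nn

class FusionQualityHead(nn.Module):
    """Sketch of multi-stage binocular fusion + quality regression (assumed shapes)."""
    def __init__(self, dim=64, tokens=49, stages=2, heads=4):
        super().__init__()
        # Cross attention fuses left/right (binocular) view features per stage.
        self.cross = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(stages)]
        )
        # "Transposed" attention attends across channels instead of tokens.
        self.transposed = nn.ModuleList(
            [nn.MultiheadAttention(tokens, 1, batch_first=True) for _ in range(stages)]
        )
        self.regressor = nn.Sequential(
            nn.Linear(stages * dim, 128), nn.GELU(), nn.Linear(128, 1)
        )

    def forward(self, left_feats, right_feats):
        stage_outputs = []
        for cross, trans, l, r in zip(self.cross, self.transposed, left_feats, right_feats):
            fused, _ = cross(l, r, r)               # query left view with right view
            t = fused.transpose(1, 2)               # (batch, dim, tokens)
            refined, _ = trans(t, t, t)             # attention across channels
            stage_outputs.append(refined.mean(dim=2))   # pool tokens -> (batch, dim)
        return self.regressor(torch.cat(stage_outputs, dim=1))  # predicted score

# Stub "VSSD" features: two stages of (batch, tokens, dim) tensors per view.
feats_left = [torch.randn(2, 49, 64) for _ in range(2)]
feats_right = [torch.randn(2, 49, 64) for _ in range(2)]
mos_pred = FusionQualityHead()(feats_left, feats_right)  # shape (2, 1)
```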
The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-11. DOI: 10.1109/TVCG.2025.3549849
Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine
{"title":"The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification.","authors":"Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine","doi":"10.1109/TVCG.2025.3549849","DOIUrl":"10.1109/TVCG.2025.3549849","url":null,"abstract":"<p><p>The Proteus effect describes how users of virtual environments adjust their attitudes to match stereotypes associated with their avatar's appearance. While numerous studies have demonstrated this phenomenon's reliability, its underlying processes remain poorly understood. This work investigates deindividuation's hypothesized but unproven role within the Proteus effect. Deindividuated individuals tend to follow situational norms rather than personal ones. Therefore, together with high embodiment and identification processes, deindividuation may lead to a stronger Proteus effect. We present two experimental studies. First, we demonstrated the emergence of the Proteus effect in a real-world academic context: engineering students got better scores in a statistical task when embodying Albert Einstein's avatar compared to a control one. In the second study, we tested the role of deindividuation by manipulating participants' exposure to different identity cues during the task. While we could not find a significant effect of deindividuation on the participants' performance, our results highlight an unexpected pattern, with embodiment as a negative predictor and identification as a positive predictor of performance. These results open avenues for further research on the processes involved in the Proteus effect, particularly those focused on the relation between the avatar and the nature of the task to be performed. All supplemental materials are available at https://osf.io/au3wk/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
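For readers unfamiliar with the reported analysis pattern (embodiment as a negative predictor and identification as a positive predictor of performance), a multiple linear regression of that shape can be run as below. The data are synthetic and purely illustrative; only the sign pattern mirrors the abstract, and none of the numbers come from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # illustrative sample size, not the study's
embodiment = rng.normal(size=n)
identification = rng.normal(size=n)
# Synthetic outcome wired to show the reported sign pattern only.
performance = -0.4 * embodiment + 0.5 * identification + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([embodiment, identification]))
model = sm.OLS(performance, X).fit()
print(model.params)  # [intercept, embodiment coef (negative), identification coef (positive)]
```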
SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-11. DOI: 10.1109/TVCG.2025.3549553
Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang
{"title":"SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes.","authors":"Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang","doi":"10.1109/TVCG.2025.3549553","DOIUrl":"10.1109/TVCG.2025.3549553","url":null,"abstract":"<p><p>Touch interaction is one of the fundamental interaction paradigms in XR, as users have become very familiar with touch interactions on physical touchscreens. However, users typically need to perform extensive arm movements for engaging with XR user interfaces much larger than mobile device touchscreens. We propose the SummonBrush technique to facilitate easy access to hidden windows while interacting with large XR user interfaces, requiring minimal arm movements. The SummonBrush technique adds a virtual brush to the index fingertip of a user's hand. Upon making contact with a virtual user interface, the brush bends and diverges and ink starts to diffuse in it. The more the brush bends and diverges, the more the ink diffuses. The user can summon hidden windows or background applications in situ, which is achieved by firstly pressing the brush against the user interface to make ink fully fill the brush and then perform swipe gestures. Also, the user can press the brush against the thumbtails of background applications in situ to quickly cycle them through. Ecological studies showed that SummonBrush significantly reduced the arm movement time by 39% and 34% in summoning hidden windows and activating/closing background applications, respectively, leading to a significant decrease in reported physical demand.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
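The interaction logic described above (ink fills while the brush is pressed, and a swipe summons a window only once the ink is full) can be pictured as a small state machine. The following is a hypothetical reconstruction; the paper publishes no such pseudocode, and `fill_rate` and the press-depth mapping are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class BrushState:
    ink: float = 0.0  # 0 = empty brush, 1 = ink has fully diffused

def update_brush(state: BrushState, press_depth: float, dt: float,
                 fill_rate: float = 2.0) -> BrushState:
    """The deeper the brush is pressed (more bend/divergence), the faster ink fills."""
    state.ink = min(1.0, state.ink + fill_rate * max(0.0, press_depth) * dt)
    return state

def try_summon(state: BrushState, swiped: bool) -> bool:
    """A hidden window is summoned only after the ink fully fills and the user swipes."""
    if state.ink >= 1.0 and swiped:
        state.ink = 0.0  # reset after a successful summon
        return True
    return False

# One frame of interaction: press for a while, then swipe.
s = BrushState()
for _ in range(60):                      # ~1 s at 60 Hz
    update_brush(s, press_depth=0.8, dt=1 / 60)
print(try_summon(s, swiped=True))        # True once the brush is full
```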
Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549535
Jong-In Lee, Wolfgang Stuerzlinger
{"title":"Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments.","authors":"Jong-In Lee, Wolfgang Stuerzlinger","doi":"10.1109/TVCG.2025.3549535","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549535","url":null,"abstract":"<p><p>Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
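A scaling technique of this kind has the two ingredients the study isolates: an input mapping (here, scroll deltas) and a scaling center. The sketch below shows one plausible formulation, scaling the viewpoint about a chosen center with an exponential scroll mapping; the mapping and the `sensitivity` constant are assumptions, not the paper's exact design.

```python
import numpy as np

def scale_about_center(position: np.ndarray, center: np.ndarray,
                       scroll_delta: float, sensitivity: float = 0.1):
    """Return the new viewpoint position and scale factor after one scroll step.

    An exponential mapping keeps scaling multiplicative, so equal scroll
    steps feel uniform across levels of scale (LoS).
    """
    factor = np.exp(sensitivity * scroll_delta)
    new_position = center + (position - center) * factor
    return new_position, factor

# Scroll "up" two ticks, scaling the viewpoint away from the world origin.
pos, f = scale_about_center(np.array([1.0, 2.0, 3.0]), np.zeros(3), scroll_delta=2.0)
```

Note how the choice of `center` changes where the user ends up after scaling, which is one intuition for why the paper finds the scaling center affects ease of use.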
Brain Signatures of Time Perception in Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549570
Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva
{"title":"Brain Signatures of Time Perception in Virtual Reality.","authors":"Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva","doi":"10.1109/TVCG.2025.3549570","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549570","url":null,"abstract":"<p><p>Achieving a high level of immersion and adaptation in virtual reality (VR) requires precise measurement and representation of user state. While extrinsic physical characteristics such as locomotion and pose can be accurately tracked in real-time, reliably capturing mental states is more challenging. Quantitative psychology allows considering more intrinsic features like emotion, attention, or cognitive load. Time perception, in particular, is strongly tied to users' mental states, including stress, focus, and boredom. However, research on objectively measuring the pace at which we perceive the passage of time is scarce. In this work, we investigate the potential of electroencephalography (EEG) as an objective measure of time perception in VR, exploring neural correlates with oscillatory responses and time-frequency analysis. To this end, we implemented a variety of time perception modulators in VR, collected EEG recordings, and labeled them with overestimation, correct estimation, and underestimation time perception states. We found clear EEG spectral signatures for these three states, that are persistent across individuals, modulators, and modulation duration. These signatures can be integrated and applied to monitor and actively influence time perception in VR, allowing the virtual environment to be purposefully adapted to the individual to increase immersion further and improve user experience. A free copy of this paper and all supplemental materials are available at https://vrarlab.uni.lu/pub/brain-signatures.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
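Spectral signatures of the kind described are commonly summarized as band power from a time-frequency decomposition. The sketch below computes Welch-based band powers for conventional EEG bands; it is an illustrative feature extractor, not the authors' analysis pipeline, and the band boundaries are standard textbook choices rather than values from the paper.

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG frequency bands in Hz (illustrative choices).
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Per-band power for one EEG channel via Welch's PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(psd[mask].sum() * df)  # integrate PSD over the band
    return powers

# 10 s of fake data at 256 Hz stands in for a recorded channel.
print(band_powers(np.random.randn(10 * 256), fs=256.0))
```

Features like these, computed per trial, could then be compared across the overestimation, correct-estimation, and underestimation labels.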
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549887
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik
{"title":"Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness.","authors":"Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik","doi":"10.1109/TVCG.2025.3549887","DOIUrl":"10.1109/TVCG.2025.3549887","url":null,"abstract":"<p><p>Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549882
Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth
{"title":"Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization.","authors":"Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth","doi":"10.1109/TVCG.2025.3549882","DOIUrl":"10.1109/TVCG.2025.3549882","url":null,"abstract":"<p><p>In medical image visualization, path tracing of volumetric medical data like computed tomography (CT) scans produces lifelike three-dimensional visualizations. Immersive virtual reality (VR) displays can further enhance the understanding of complex anatomies. Going beyond the diagnostic quality of traditional 2D slices, they enable interactive 3D evaluation of anatomies, supporting medical education and planning. Rendering high-quality visualizations in real-time, however, is computationally intensive and impractical for compute-constrained devices like mobile headsets. We propose a novel approach utilizing Gaussian Splatting (GS) to create an efficient but static intermediate representation of CT scans. We introduce a layered GS representation, incrementally including different anatomical structures while minimizing overlap and extending the GS training to remove inactive Gaussians. We further compress the created model with clustering across layers. Our approach achieves interactive frame rates while preserving anatomical structures, with quality adjustable to the target hardware. Compared to standard GS, our representation retains some of the explorative qualities initially enabled by immersive path tracing. Selective activation and clipping of layers are possible at rendering time, adding a degree of interactivity to otherwise static GS models. This could enable scenarios where high computational demands would otherwise prohibit using path-traced medical volumes.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
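Two of the operations described above, removing inactive Gaussians during training and toggling anatomical layers at render time, reduce to simple filtering over per-Gaussian attributes. The sketch below shows one plausible reading of those steps; the opacity threshold and the per-Gaussian `layer_ids` layout are assumptions, not details taken from the paper.

```python
import numpy as np

def prune_inactive(means, opacities, layer_ids, min_opacity=0.005):
    """Drop Gaussians whose learned opacity has fallen below a contribution threshold."""
    keep = opacities > min_opacity
    return means[keep], opacities[keep], layer_ids[keep]

def layer_mask(layer_ids, active_layers):
    """Render-time selection: keep only Gaussians belonging to active anatomy layers."""
    return np.isin(layer_ids, list(active_layers))

# Toy model: 1000 Gaussians across four hypothetical layers (0=skin, 1=bone, ...).
means = np.random.rand(1000, 3)
opacities = np.random.rand(1000)
layer_ids = np.random.randint(0, 4, size=1000)

means, opacities, layer_ids = prune_inactive(means, opacities, layer_ids)
visible = layer_mask(layer_ids, {1, 2})  # e.g. show only bone and vessels
```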
ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549872
Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang
{"title":"ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums.","authors":"Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3549872","DOIUrl":"10.1109/TVCG.2025.3549872","url":null,"abstract":"<p><p>The viewing experience of 3D artifacts in Virtual Reality (VR) museums is constrained and affected by various factors, such as pedestal height, viewing distance, and object scale. User experiences regarding these factors can vary subjectively, making it difficult to identify a universal optimal solution. In this paper, we collect empirical data on user-determined parameters for the optimal viewing experience in VR museums. By modeling users' viewing behaviors in VR museums, we derive predictive functions that configure the pedestal height, calculate the optimal viewing distance, and adjust the appropriate handheld scale for the optimal viewing experience. This led to our novel 3D responsive design, ResponsiveView. Similar to the responsive web design that automatically adjusts for different screen sizes, ResponsiveView automatically adjusts the parameters in the VR environment to facilitate users' viewing experience. The design has been validated with two popular inputs available in current commercial VR devices: controller-based interactions and hand tracking, demonstrating enhanced viewing experience in VR museums.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
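The abstract derives predictive functions for pedestal height, viewing distance, and handheld scale from empirical data. The sketch below illustrates only the shape of such a mapping; every function form and coefficient is invented, since the paper's fitted functions are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ViewingConfig:
    pedestal_height: float   # meters
    viewing_distance: float  # meters
    handheld_scale: float    # unitless multiplier

def responsive_view(object_size: float, user_eye_height: float) -> ViewingConfig:
    """Hypothetical mapping from artifact size and eye height to viewing parameters.

    All constants below are placeholders standing in for the paper's fitted models.
    """
    return ViewingConfig(
        pedestal_height=user_eye_height - 0.5 * object_size,
        viewing_distance=max(0.4, 2.5 * object_size),
        handheld_scale=min(1.0, 0.3 / object_size),
    )

# A 0.2 m vase viewed by a user whose eyes are 1.6 m above the floor.
print(responsive_view(object_size=0.2, user_eye_height=1.6))
```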
Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549557
Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang Jin
{"title":"Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling.","authors":"Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang Jin","doi":"10.1109/TVCG.2025.3549557","DOIUrl":"10.1109/TVCG.2025.3549557","url":null,"abstract":"<p><p>Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure highdefinition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image projection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct an high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we utilize accumulated history data and low-pass filtering for filling, ensuring high-quality results with minimal delay. Extensive evaluations on both the PC and the standalone device confirm that our framework requires short runtime to generate high-fidelity images, making it an effective solution for stereo rendering across various VR platforms.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
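The two core steps named above, blending the rendered LR frame with accumulated history into an HR frame and then reprojecting that frame to the other eye, can be sketched as follows. This simplification omits the upsampling, filtering, and disocclusion-filling stages; the blend weight and the nearest-pixel horizontal warp are assumptions made for illustration.

```python
import numpy as np

def temporal_blend(upsampled_lr: np.ndarray, history: np.ndarray,
                   alpha: float = 0.1) -> np.ndarray:
    """Exponential accumulation of temporal samples into a high-resolution frame."""
    return alpha * upsampled_lr + (1.0 - alpha) * history

def reproject_to_other_eye(hr: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Shift each pixel horizontally by its disparity (nearest-pixel forward warp)."""
    h, w = hr.shape[:2]
    out = np.zeros_like(hr)
    xs = np.clip(np.arange(w)[None, :] + np.round(disparity).astype(int), 0, w - 1)
    out[np.arange(h)[:, None], xs] = hr  # unwritten pixels are disocclusion holes
    return out

# Toy frame data: grayscale images with a uniform 4-pixel disparity.
frame = temporal_blend(np.random.rand(480, 640), np.random.rand(480, 640))
other_eye = reproject_to_other_eye(frame, np.full((480, 640), 4.0))
```

In the paper's pipeline, the holes left by the forward warp are what the history-based, low-pass-filtered filling step addresses.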
Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-10. DOI: 10.1109/TVCG.2025.3549843
Nicalia Thompson, Xueni Pan, Maria Herrojo Ruiz
{"title":"Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists.","authors":"Nicalia ThompSon, Xueni Pan, Maria Herrojo Ruiz","doi":"10.1109/TVCG.2025.3549843","DOIUrl":"10.1109/TVCG.2025.3549843","url":null,"abstract":"<p><p>Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating, associated with changes in cognitive, emotional, behavioral, and physiological responses to performance situations. Efforts have been made to create simulated performance environments in conservatoires and Virtual Reality (VR) to assess their effectiveness in managing MPA. Despite these advances, results have been mixed, underscoring the need for controlled experimental designs and joint analyses of performance, physiology, and subjective ratings in these settings. Furthermore, the broader application of simulated performance environments for at-home use and laboratory studies on MPA remains limited. We designed VR scenarios to induce MPA in pianists and embedded them within a controlled within-subject experimental design to systematically assess their effects on performance, physiology, and anxiety ratings. Twenty pianists completed a performance task under two conditions: a public 'Audition' and a private 'Studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performances in the real world. We measured subjective anxiety, performance (MIDI data), and heart rate variability (HRV). Compared to the Studio condition, pianists in the Audition condition reported higher somatic anxiety ratings and demonstrated an increase in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity. No concurrent changes in HRV were observed. These results validate the potential of VR to induce MPA, enhancing pitch accuracy and invigorating tempo and dynamics. We discuss the strengths and limitations of this approach to develop VR-based interventions to mitigate the debilitating effects of MPA.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
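HRV is typically summarized with time-domain measures over inter-beat (RR) intervals, and RMSSD is a standard one. The sketch below is a generic implementation for illustration, not the authors' analysis code, and the sample RR values are made up.

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between heartbeat (RR) intervals."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Five illustrative RR intervals in milliseconds.
print(rmssd(np.array([812.0, 845.0, 790.0, 830.0, 860.0])))
```

Lower RMSSD is conventionally read as reduced parasympathetic activity, which is why a measure like this is a natural candidate for comparing the Audition and Studio conditions.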