Latest articles from IEEE Transactions on Visualization and Computer Graphics

MR-CoCo: an Open Mixed Reality Testbed for Co-located Couple Product Configuration and Decision-Making - A Sailboat Case Study.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616734
Fabio Vangi, Daniel Medeiros, Mine Dastan, Michele Fiorentino
Abstract: The literature has demonstrated the advantages of Mixed Reality (MR) for product configuration by providing a more engaging and effective end-user experience. While collaborative and remote design tools in MR have been widely explored in previous studies, a noticeable gap remains in the exploration of co-located product configuration for couples. This gap is noteworthy since in many industries, couples (e.g., friends, partners) often make purchasing decisions together in physical retail environments. In this paper, we introduce MR-CoCo, an open MR testbed designed to explore collaborative configurations by co-located couples, both in the role of customers. The testbed is developed in Unity and features: (i) a shared MR space with virtual product 3D model anchoring, (ii) shared visualization of the current configuration, (iii) a versatile UI for selecting configuration areas, and (iv) hand gestures for 3D drag and drop of colors and materials from a 3D catalog to the product. A case study of the personalization of a sailboat is provided as proof of concept. The user study involved 24 couples (48 participants in total), simulating a purchasing experience and the related configuration using MR-CoCo. We assessed usability through post-experience evaluations, with the System Usability Scale (SUS) and the Co-Presence Configuration Questionnaire (CCQ) to measure collaboration and decision-making. The results demonstrated a high level of usability and perceived quality of collaboration. We also explore guidelines that can be used for remote collaboration applications, enabling configuration across a wide range of industries (e.g., automotive and clothing).
Citations: 0
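The SUS instrument used in the study above has a standardized scoring rule: ten items rated 1-5, where odd-numbered items contribute the rating minus one and even-numbered items contribute five minus the rating, with the sum scaled to 0-100. A minimal sketch of that computation (the CCQ scoring is not standardized in the same way and is omitted):

```python
def sus_score(responses):
    """System Usability Scale: 10 items, each rated 1-5.

    Odd-numbered items (index 0, 2, ...) contribute (r - 1);
    even-numbered items contribute (5 - r); the sum is scaled to 0-100.
    """
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent who strongly agrees with every positive item (odd)
# and strongly disagrees with every negative item (even) scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))
```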
Bundling-Aware Graph Drawing Revisited.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616583
Markus Wallinger, Tommaso Piselli, Alessandra Tappini, Daniel Archambault, Giuseppe Liotta, Martin Nollenburg
Abstract: Edge bundling algorithms can significantly improve the visualization of dense graphs by identifying and bundling together suitable groups of edges, thus reducing visual clutter. As such, bundling is often viewed as a post-processing step applied to a drawing, and the vast majority of edge bundling algorithms take a graph and its drawing as input. A different way of thinking about edge bundling is to simultaneously optimize both the drawing and the bundling, which we investigate in this paper. We build on earlier work in which we introduced a novel algorithmic framework for bundling-aware graph drawing consisting of three main steps: Filter for a skeleton subgraph, Draw the skeleton, and Bundle the remaining edges against the drawing of the skeleton. We propose several alternative implementations and experimentally compare them against each other and against the simple idea of first drawing the full graph and subsequently applying edge bundling to it. The experiments confirm that bundled drawings created by our Filter-Draw-Bundle framework outperform previous approaches according to metrics for edge bundling and graph drawing.
Citations: 0
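The Filter and Bundle steps of the framework can be illustrated compactly. The sketch below is not the paper's implementation (the paper explores several skeleton filters and bundling methods); it assumes a BFS spanning tree of a connected graph as the skeleton and routes each non-skeleton edge along the tree path between its endpoints, with the Draw step omitted:

```python
from collections import deque

def filter_skeleton(nodes, edges):
    """Step 1 (Filter): keep a BFS spanning tree as the skeleton subgraph.

    Assumes an undirected, connected graph. Returns the skeleton edges and
    the BFS parent map, which encodes tree paths for the Bundle step.
    """
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = nodes[0]
    parent = {root: None}
    skeleton, queue = [], deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                skeleton.append((u, w))
                queue.append(w)
    return skeleton, parent

def bundle_edge(parent, u, v):
    """Step 3 (Bundle): route a non-skeleton edge along the tree path u..v."""
    # Collect all ancestors of u, then walk up from v until we hit one of
    # them (the lowest common ancestor), and splice the two half-paths.
    ancestors = []
    x = u
    while x is not None:
        ancestors.append(x)
        x = parent[x]
    index = {n: i for i, n in enumerate(ancestors)}
    path_from_v = []
    y = v
    while y not in index:
        path_from_v.append(y)
        y = parent[y]
    return ancestors[:index[y] + 1] + path_from_v[::-1]

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "e"), ("d", "e")]
skeleton, parent = filter_skeleton(nodes, edges)
extra = [e for e in edges if e not in skeleton]
routes = {e: bundle_edge(parent, *e) for e in extra}
print(routes)
```

Routing every surplus edge through the skeleton is what concentrates edges into shared corridors; the paper's implementations refine both which skeleton is chosen and how the remaining edges are attached to its drawing.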
WarpVision: Using Spatial Curvature to Guide Attention in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616806
Jerome Kudnick, Martin Weier, Colin Groth, Biying Fu, Robin Horst
Abstract: With the advent of consumer-targeted, low-cost virtual reality devices and facile authoring technologies, the development and design of experiences in virtual reality are becoming more accessible to non-expert authors. However, the inherent freedom of exploration in these virtual spaces presents a significant challenge for designers seeking to guide user attention toward points and objects of interest. This paper proposes WarpVision, a new technique that utilizes spatial curvature to subtly guide the user's attention in virtual reality. WarpVision distorts an area around the point of interest, changing the size, form, and location of all objects and the space around them. In this way, the user's attention can be guided even when the point of interest is not in the immediate field of vision. WarpVision is evaluated in a within-subjects user study comparing it to the state-of-the-art technique Deadeye. Participants completed visual search tasks across two virtual environments, supported by WarpVision at four different intensities. Results show that WarpVision significantly reduces search times compared to Deadeye. While both techniques introduce comparable levels of immersion disruption, WarpVision has a lower reported impact on the user's well-being.
Citations: 0
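The abstract does not specify WarpVision's distortion function, but the general idea of warping the space around a point of interest can be sketched as a radial displacement with smooth falloff. In this hypothetical 2D sketch, `radius` and `strength` are illustrative parameters (roughly corresponding to the "intensities" tested in the study), not values from the paper:

```python
import math

def warp_point(p, poi, radius=2.0, strength=0.5):
    """Pull a scene point toward the point of interest with smooth falloff.

    Points at distance >= `radius` from the point of interest are left
    untouched; inside the radius, the displacement fades to zero at the rim
    so the warp blends continuously into the undistorted scene.
    """
    dx, dy = p[0] - poi[0], p[1] - poi[1]
    d = math.hypot(dx, dy)
    if d >= radius or d == 0.0:
        return p
    falloff = (1.0 - d / radius) ** 2   # smooth weight, zero at the rim
    s = 1.0 - strength * falloff        # radial scale < 1 pulls inward
    return (poi[0] + dx * s, poi[1] + dy * s)
```

Applied per vertex each frame, a function of this shape changes the size, form, and location of objects near the point of interest, which matches the behavior the abstract describes.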
The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616864
Christian Merz, Niklas Krome, Carolin Wienrich, Stefan Kopp, Marc Erich Latoschik
Abstract: This study explores how people interact in dyadic social eXtended Reality (XR), focusing on two main factors: the animation type of a conversation partner's avatar and how immersed the user feels in the virtual environment. Specifically, we investigate how 1) idle behavior, 2) AI-generated gestures, and 3) motion-captured movements from a confederate (a controlled partner in the study) influence the quality of conversation and how that partner is perceived. We examined these effects in both symmetric interactions (where both participants use VR headsets and controllers) and asymmetric interactions (where one participant uses a desktop setup). We developed a social XR platform that supports asymmetric device configurations to provide varying levels of immersion. The platform also supports a modular avatar animation system providing idle behavior, real-time AI-generated co-speech gestures, and full-body motion capture. Using a 2×3 mixed design with 39 participants, we measured users' sense of spatial presence, their perception of the confederate, and the overall conversation quality. Our results show that users who were more immersed felt a stronger sense of presence and viewed their partner as more human-like and believable. Surprisingly, however, the type of avatar animation did not significantly affect conversation quality or how the partner was perceived. Participants often reported focusing more on what was said rather than on how the avatar moved.
Citations: 0
Investigating the Effects of Haptic Illusions in Collaborative Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616760
Yannick Weiss, Julian Rasch, Jonas Fischer, Florian Muller
Abstract: Our sense of touch plays a crucial role in physical collaboration, yet rendering realistic haptic feedback in collaborative extended reality (XR) remains a challenge. Co-located XR systems predominantly rely on prefabricated passive props that provide high-fidelity interaction but offer limited adaptability. Haptic Illusions (HIs), which leverage multisensory integration, have proven effective in expanding haptic experiences in single-user contexts. However, their role in XR collaboration has not been explored. To examine the applicability of HIs in multi-user scenarios, we conducted an experimental user study (N=30) investigating their effect on a collaborative object handover task in virtual reality. We manipulated visual shape and size individually and analyzed their impact on users' performance, experience, and behavior. Results show that while participants adapted to the illusions by shifting sensory reliance and employing specific sensorimotor strategies, visuo-haptic mismatches reduced both performance and experience. Moreover, mismatched visualizations in asymmetric user roles negatively impacted performance. Drawing from these findings, we provide practical guidelines for incorporating HIs into collaborative XR, marking a first step toward richer haptic interactions in shared virtual spaces.
Citations: 0
Radiance Fields in XR: A Survey on How Radiance Fields are Envisioned and Addressed for XR Research.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616794
Ke Li, Mana Masuda, Susanne Schmidt, Shohei Mori
Abstract: The development of radiance fields (RFs), such as 3D Gaussian Splatting (3DGS) and Neural Radiance Fields (NeRF), has revolutionized interactive photorealistic view synthesis and presents enormous opportunities for XR research and applications. However, despite the exponential growth of RF research, RF-related contributions to the XR community remain sparse. To better understand this research gap, we performed a systematic survey of the current RF literature to analyze (i) how RFs are envisioned for XR applications, (ii) how they have already been implemented, and (iii) the remaining research gaps. We collected 365 RF contributions related to XR from the computer vision, computer graphics, robotics, multimedia, human-computer interaction, and XR communities, seeking to answer the above research questions. Among the 365 papers, we analyzed 66 papers that already addressed a detailed aspect of RF research for XR. With this survey, we extend and position XR-specific RF research topics in the broader RF research field and provide a helpful resource for the XR community to navigate the rapid development of RF research.
Citations: 0
Effects of AI-Powered Embodied Avatars on Communication Quality and Social Connection in Asynchronous Virtual Meetings.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616761
Hyeongil Nam, Muskan Sarvesh, Seoyoung Kang, Woontack Woo, Kangsoo Kim
Abstract: Immersive technologies such as virtual and augmented reality (VR/AR) allow remote users to meet and interact in a shared virtual space using embodied virtual avatars, creating a sense of co-presence. However, asynchronous communication, essential in many real-world contexts, remains underexplored in these environments. Traditional playback-based systems lack interactivity and often fail to preserve critical contextual cues necessary for effective asynchronous communication. In this paper, we introduce Avagents, AI-powered virtual avatars that replicate users' verbal and nonverbal cues from recordings of past meetings. Avagents can interpret meeting context and generate appropriate responses to questions posed by asynchronous viewers. Through a user study (N=30), we evaluated Avagents against a traditional playback method and a voice-based AI assistant across two asynchronous meeting scenarios: analytic reasoning and affective resonance. Results showed that Avagents enhance the asynchronous communication experience by increasing social presence, sense of belonging, emotional intimacy, and other user perceptions. We discuss the findings and their implications for designing effective AI-driven asynchronous communication tools in VR/AR environments.
Citations: 0
Selection at a Distance through a Large Transparent Touch Screen.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616756
Sebastian Rigling, Steffen Koch, Dieter Schmalstieg, Bruce H Thomas, Michael Sedlmair
Abstract: Large transparent touch screens (LTTS) have recently become commercially available. These displays have the potential for engaging Augmented Reality (AR) applications, especially in public and shared spaces. However, interaction with objects in the real environment behind the display remains challenging: users must combine pointing and touch input if they want to select objects at varying distances. There is a lot of work on wearable or mobile AR displays, but little on how users interact with LTTS. Our goal is to contribute to a better understanding of natural user interaction for these AR displays. To this end, we developed a prototype and evaluated different pointing techniques for selecting 12 physical targets behind an LTTS, with distances ranging from 6 to 401 cm. We conducted a user study with 16 participants and measured user preferences, performance, and behavior. We analyzed the change in accuracy depending on the target position and the selection technique used. Our findings include: (a) users naturally align the touch point with their line of sight for targets farther than 36 cm behind the LTTS; (b) this technique provides the lowest angular deviation compared to other techniques; (c) some users close one eye to improve their performance. Our results help to improve future AR scenarios using LTTS systems.
Citations: 0
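Angular deviation, the accuracy measure named in the findings, is typically the angle between the ray from the eye through the touch point and the ray from the eye to the target. A minimal sketch of that computation; the paper's exact measurement procedure may differ:

```python
import math

def angular_deviation(eye, touch, target):
    """Angle in degrees between the eye->touch and eye->target rays.

    All arguments are 3D points. The dot product is clamped to [-1, 1]
    to guard against floating-point drift before taking acos.
    """
    a = [t - e for t, e in zip(touch, eye)]
    b = [t - e for t, e in zip(target, eye)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_angle = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_angle))
```

Measuring error as an angle at the eye, rather than a distance on the screen, makes selections at 6 cm and 401 cm directly comparable, which is presumably why the study reports it.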
Towards Augmented Reality Support for Swarm Monitoring: Evaluating Visual Cues to Prevent Fragmentation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616840
Aymeric Henard, Etienne Peillard, Jeremy Riviere, Sebastien Kubicki, Gilles Coppin
Abstract: Swarm fragmentation, the breakdown of communication and coordination among robots, can critically compromise a swarm's mission. Integrating Augmented Reality support into swarm monitoring, especially through co-located visualisations anchored directly on the robots, may enable human operators to detect early signs of fragmentation and intervene effectively. In this work, we propose three localised visual cues, targeting robot connectivity, dominant decision influences, and movement direction, to make explicit the underlying Perception-Decision-Action (PDA) loop of each robot. Through an immersive Virtual Reality user study, 51 participants were tasked with both anticipating potential fragmentation and selecting the appropriate control to prevent it, while observing swarms exhibiting expansion, densification, flocking, and swarming behaviours. Our results reveal that a visualisation emphasising inter-robot connectivity significantly improves anticipation of fragmentation, though none of the cues consistently enhance control selection over a baseline condition. These findings underscore the potential of co-located AR-enhanced visual feedback to support human-swarm interaction and inform the design of future AR-based supervisory systems for robot swarms. A free copy of this paper and all supplemental materials are available at https://osf.io/49gny.
Citations: 0
Spatiotemporal Calibration and Ground Truth Estimation for High-Precision SLAM Benchmarking in Extended Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616838
Zichao Shu, Shitao Bei, Lijun Li, Zetao Chen
Abstract: Simultaneous localization and mapping (SLAM) plays a fundamental role in extended reality (XR) applications. As the standards for immersion in XR continue to rise, the demands on SLAM benchmarking have become more stringent. Trajectory accuracy is the key metric, and marker-based optical motion capture (MoCap) systems are widely used to generate ground truth (GT) because of their drift-free and relatively accurate measurements. However, the precision of MoCap-based GT is limited by two factors: the spatiotemporal calibration with the device under test (DUT) and the inherent jitter in the MoCap measurements. These limitations hinder accurate SLAM benchmarking, particularly for key metrics like rotation error and inter-frame jitter, which are critical for immersive XR experiences. This paper presents a novel continuous-time maximum likelihood estimator to address these challenges. The proposed method integrates auxiliary inertial measurement unit (IMU) data to compensate for MoCap jitter. Additionally, a variable time synchronization method and a pose residual based on screw congruence constraints are proposed, enabling precise spatiotemporal calibration across multiple sensors and the DUT. Experimental results demonstrate that our approach outperforms existing methods, achieving the precision necessary for comprehensive benchmarking of state-of-the-art SLAM algorithms in XR applications. Furthermore, we thoroughly validate the practicality of our method by benchmarking several leading XR devices and open-source SLAM algorithms. The code is publicly available at https://github.com/ylab-xrpg/xr-hpgt.
Citations: 0
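Trajectory accuracy in SLAM benchmarking is commonly reported as absolute trajectory error (ATE). The sketch below shows the RMSE form over position pairs that are already time-synchronized and spatially aligned; the paper's contribution is precisely the calibration that makes such pairing valid, which this sketch takes as given:

```python
import math

def ate_rmse(gt_positions, est_positions):
    """Absolute trajectory error: RMSE over paired 3D positions.

    Assumes the ground-truth and estimated trajectories are already
    time-synchronized and expressed in a common reference frame, so that
    the i-th entries of both lists correspond to the same instant.
    """
    assert len(gt_positions) == len(est_positions) > 0
    sq_sum = 0.0
    for (gx, gy, gz), (ex, ey, ez) in zip(gt_positions, est_positions):
        sq_sum += (gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
    return math.sqrt(sq_sum / len(gt_positions))
```

Because the metric squares per-pose errors, residual MoCap jitter or a small time offset in the GT inflates it directly, which is why the paper invests in jitter compensation and variable time synchronization before any such comparison is made.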