IEEE Transactions on Visualization and Computer Graphics — Latest Articles

Experiencing Immersive Virtual Nature for Well-Being, Restoration, Performance, and Nature Connectedness: a Scoping Review.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616762
Jeewoo Kim, Svara Patel, Hyeongil Nam, Janghee Cho, Kangsoo Kim
This paper presents a scoping review of immersive virtual nature experiences delivered via head-mounted displays (HMDs) and their role in promoting well-being, psychological restoration, cognitive performance, and nature connectedness. As access to natural environments becomes increasingly constrained by urbanization, technological lifestyles, and environmental change, immersive technologies offer a scalable and accessible alternative for engaging with nature. Guided by three core research questions, this review explores how HMD-mediated immersive technologies have been used to promote nature connectedness and well-being, what trends and outcomes have been observed across applications, and what methodological gaps or limitations exist in this growing body of work. Fifty-five peer-reviewed studies were analyzed and categorized into six key implication areas: emotional well-being, stress reduction, cognitive performance, attention recovery, restorative benefits, and nature connectedness. The review identifies immersive virtual nature as a promising application of extended reality (XR) technologies, with potential across healthcare, education, and daily life, while also emphasizing the need for more consistent methodologies and long-term research.
Citations: 0
Exploring the Effects of Augmented Reality Guidance Position within a Body-Fixed Coordinate System on Pedestrian Navigation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616773
Shunbo Wang, Qing Xu, Klaus Schoeffmann
AR head-mounted displays (HMDs) facilitate pedestrian navigation by integrating AR guidance into users' field of view (FOV). Displaying AR guidance using a body-fixed coordinate system has the potential to further leverage this integration by enabling users to control when the guidance appears in their FOV. However, it remains unclear how to effectively position AR guidance within this coordinate system during pedestrian navigation. Therefore, we explored the effects of three AR guidance positions (top, middle, and bottom) within a body-fixed coordinate system on pedestrian navigation in a virtual environment. Our results showed that AR guidance position significantly influenced eye movements, walking behaviors, and subjective evaluations. The top position resulted in the shortest duration of fixations on the guidance compared to the middle and bottom positions, and lower mental demand than the bottom position. The middle position had the smallest rate of vertical eye movement during gaze shifts between the guidance and the environment, and the smallest relative difference in walking speed between fixations on the guidance and the environment compared to the top and bottom positions. The bottom position led to the shortest duration and smallest amplitude of gaze shifts between the guidance and the environment compared to the top and middle positions, and lower frustration than the top position. Based on these findings, we offer design implications for AR guidance positioning within a body-fixed coordinate system during pedestrian navigation.
Citations: 0
Application of Transitional Mixed Reality Interfaces: A Co-design Study with Flood-prone Communities.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616755
Zhiling Jie, Geert Lugtenberg, Renjie Zhang, Armin Teubert, Makoto Fujisawa, Hideaki Uchiyama, Kiyoshi Kiyokawa, Isidro Butaslac, Taishi Sawabe, Hirokazu Kato
Flood risk communication in disaster-prone communities often relies on traditional tools (e.g., paper and browser-based hazard/flood maps) that struggle to engage community stakeholders and to convey flood situations intuitively. In this paper, we applied the transitional mixed reality (MR) interface concept from pioneering work and extended it to flood risk communication scenarios through co-design with community stakeholders, helping vulnerable residents understand flood risk and facilitating preparedness. Starting with an initial transitional MR prototype, we conducted three iterative workshops, each dedicated to device usability, visualization techniques, and interaction methods. We collaborated with diverse community stakeholders in flood-prone areas, collecting feedback to refine the system according to community needs. Our preliminary evaluation indicates that this co-designed system significantly improves user understanding and engagement compared to traditional tools, though some older residents faced usability challenges. We detail this iterative co-design process along with critical insights and design implications, offering our work as a practical case of mixed reality applications for strengthening flood risk communication. We also discuss the system's potential to support community-driven collaboration in flood preparedness.
Citations: 0
Source-Free Model Adaptation for Unsupervised 3D Object Retrieval.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3617082
Dan Song, Yiyao Wu, Yuting Ling, Diqiong Jiang, Yao Jin, Ruofeng Tong
With the explosive growth of 3D objects and the high cost of annotation, unsupervised 3D object retrieval has become a popular but challenging research area. Existing labeled resources have been utilized to aid this task via transfer learning, which aligns the distribution of unlabeled data with that of the source. However, labeled resources are not always accessible due to privacy disputes, limited computational capacity, and other restrictions. Therefore, we propose a source-free model adaptation task for unsupervised 3D object retrieval, which utilizes a pre-trained model to boost performance with no access to source data or labels. Specifically, we compute representative prototypes to approximate the source feature distribution, and design a bidirectional cumulative confidence-based adaptation strategy to adaptively align unlabeled samples towards the prototypes. Subsequently, a dual-model distillation mechanism is proposed to generate source hypotheses that remedy the absence of ground-truth labels. Experiments on the cross-domain retrieval benchmark NTU-PSB (PSB-NTU) and the cross-modality retrieval benchmark MI3DOR demonstrate the superiority of the proposed method even without access to raw data. Code is available at: https://github.com/Wyyspace1203/MA.
Citations: 0
SGSG: Stroke-Guided Scene Graph Generation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616751
Qixiang Ma, Runze Fan, Lizhi Zhao, Jian Wu, Sio-Kei Im, Lili Wang
3D scene graph generation is essential for spatial computing in Extended Reality (XR), providing structured semantics for task planning and intelligent perception. However, unlike instance-segmentation-driven setups, generating semantic scene graphs still suffers from limited accuracy due to the coarse and noisy point cloud data typically acquired in practice, and from the lack of interactive strategies to incorporate users' spatialized and intuitive guidance. We identify three key challenges: designing controllable interaction forms, involving guidance in inference, and generalizing from local corrections. To address these, we propose SGSG, a stroke-guided scene graph generation method that enables users to interactively refine 3D semantic relationships and improve predictions in real time. We propose three types of strokes and a lightweight SGstrokes dataset tailored for this modality. Our model integrates stroke guidance representation and injection for spatio-temporal feature learning and reasoning correction, along with intervention losses that combine consistency-repulsive and geometry-sensitive constraints to enhance accuracy and generalization. Experiments and a user study show that SGSG outperforms the state-of-the-art methods 3DSSG and SGFN in overall accuracy and precision, surpasses JointSSG in predicate-level metrics, and reduces task load across all control conditions, establishing SGSG as a new benchmark for interactive 3D scene graph generation and semantic understanding in XR. Implementation resources are available at: https://github.com/Sycamore-Ma/SGSG-runtime.
Citations: 0
Immersive Intergroup Contact: Using Virtual Reality to Enhance Empathy and Reduce Stigma towards Schizophrenia.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616759
Jiaqi Yin, Shihan Liu, Shao-Wen Lee, Andreas Kitsios, Marco Gillies, Michele Denise Birtel, Harry Farmer, Xueni Pan
Stigma towards individuals with schizophrenia reduces quality of life, creating a barrier to accessing education and employment opportunities. Schizophrenia is one of the most stigmatized mental health conditions, and stigma is particularly prevalent among healthcare professionals. In this study, we investigated whether Virtual Reality (VR) can be incorporated into interventions to reduce stigma. In particular, we compared the effectiveness of three VR conditions based on intergroup contact theory in reducing stigma in the form of implicit and explicit attitudes and behavioral intentions. Through an immersive virtual consultation in a clinical setting, participants (N = 60) experienced one of three conditions: the Doctor's perspective (embodiment in a majority group member during contact), the Patient's perspective (embodiment in a minority group member), and a Third-person perspective (vicarious contact). Results demonstrated an increase of stigma on certain explicit measures (perceived recovery and social restriction) but also an increase of empathy (perspective-taking, empathic concern) across all conditions regardless of perspective. More importantly, participants' viewpoint influenced the desire for social distance differently depending on the perspective: Third-person observation significantly increased the desire for social distance, Doctor embodiment marginally decreased it, while Patient embodiment showed no significant change. No change was found in the Implicit Association Test. These findings suggest that VR intergroup contact can effectively reduce certain dimensions of stigma toward schizophrenia, but the type of perspective experienced significantly impacts outcomes.
Citations: 0
Impact of Avatar-Locomotion Congruence on User Experience and Identification in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616836
Omar Khan, Hyeongil Nam, Kangsoo Kim
As virtual reality (VR) continues to expand, particularly in social VR platforms and immersive gaming environments, understanding the factors that shape user experience is becoming increasingly important. Avatars and locomotion methods play central roles in influencing how users identify with their virtual representations and navigate virtual spaces. Despite extensive research on these elements individually, their relationship remains underexplored. In particular, little is known about how congruence between avatar appearance and locomotion method affects user perceptions. This study investigates the impact of avatar-locomotion congruence on user experience and avatar identification in VR. We conducted a within-subjects experiment with 30 participants, employing two visually distinct avatar types (human and gorilla) and two locomotion methods (human-like arm-swinging and gorilla-like arm-rolling), to assess their individual and combined effects. Our results indicate that congruence between avatar appearance and locomotion method enhances both avatar identification and user experience. These findings contribute to the understanding of the relationship between avatars and locomotion in VR, with potential applications in enhancing user experience in immersive gaming, social VR, and gamified remote physical therapy.
Citations: 0
HFM-GS: half-face mapping 3DGS avatar based real-time HMD removal.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616801
Kangyu Wang, Jian Wu, Runze Fan, Hongwen Zhang, Sio Kei Im, Lili Wang
In extended reality (XR) applications, enhancing user perception often necessitates head-mounted display (HMD) removal. However, existing methods suffer from low time performance and suboptimal reconstruction quality. In this paper, we propose a half-face mapping 3D Gaussian splatting (3DGS) avatar-based HMD removal method (HFM-GS), which can perform real-time, high-fidelity online restoration of the complete face in HMD-occluded videos for XR applications after a short un-occluded face registration. We establish a mapping field between the upper- and lower-face Gaussians to enhance adaptability to deformation. We then introduce correlation weight-based sampling to improve time performance and handle variations in the number of Gaussians. Finally, we ensure model robustness through a Gaussian segregation strategy. Compared to two state-of-the-art methods, our method achieves better quality and time performance. The results of a user study show that fidelity is significantly improved with our method.
Citations: 0
The Effect of Realism on Hand Redirection in Immersive Environments.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616743
Shuqi Liao, Yuqi Zhou, Voicu Popescu
Redirection in virtual reality (VR) enhances haptic feedback versatility by relaxing the need for precise alignment between virtual and physical objects. In mixed reality (MR), where users see the real world and their own hands, haptic redirection enables physical interaction with virtual objects but poses greater challenges because it alters real-world perception. This paper investigates the effect of the realism of the user's surroundings and of the user's hand on haptic redirection. The user's familiarity with their actual physical surroundings and their actual hand could make the redirection manipulations easier, or harder, to detect. In a user study (N = 30), participants saw either a virtual environment or their actual physical surroundings, and saw their hand rendered either with a generic 3D model or with a live 2D video sprite of their actual hand. The study used a two-alternative forced choice (2AFC) design, asking participants to detect hand redirections that bridged physical-to-virtual offsets of varying magnitudes. The results show that participants were not more sensitive to 2D video sprite hand redirection than to VR hand redirection, which supports the use of haptic redirection in MR.
Citations: 0
Viewpoint-Tolerant Depth Perception for Shared Extended Space Experience on Wall-Sized Display.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616758
Dooyoung Kim, Jinseok Hong, Heejeong Ko, Woontack Woo
We propose viewpoint-tolerant shared depth perception without individual tracking, leveraging human cognitive compensation in universally 3D-rendered images on a wall-sized display. While traditional 3D-perception-enabled display systems have primarily focused on single-user scenarios, adapting rendering based on head and eye tracking, the use of wall-sized displays to extend spatial experiences and support perceptually coherent multi-user interactions remains underexplored. We investigated the effects of virtual depth (d_v) and absolute viewing distance (d_a) on human cognitive compensation factors (perceived distance difference, viewing angle threshold, and perceived presence) to construct a wall display-based eXtended Reality (XR) space. Results show that participants experienced compelling depth perception even from off-center angles of 23° to 37°, and that greatly increasing virtual depth worsens depth perception and presence factors, highlighting the importance of balancing the extended depth of the virtual space against viewing distance from the wall-sized display. Drawing on these findings, wall-sized displays in venues such as museums, galleries, and classrooms can evolve beyond 2D information sharing to offer immersive, spatially extended group experiences without individualized tracking or wearables.
Citations: 0