IEEE Transactions on Visualization and Computer Graphics — Latest Articles

Immersive Intergroup Contact: Using Virtual Reality to Enhance Empathy and Reduce Stigma towards Schizophrenia.
Jiaqi Yin, Shihan Liu, Shao-Wen Lee, Andreas Kitsios, Marco Gillies, Michele Denise Birtel, Harry Farmer, Xueni Pan
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616759. IF 6.5.
Abstract: Stigma towards individuals with schizophrenia reduces quality of life, creating a barrier to accessing education and employment opportunities. Schizophrenia is one of the most stigmatized mental health conditions, and stigma is particularly prevalent among healthcare professionals. In this study, we investigated whether Virtual Reality (VR) can be incorporated into interventions to reduce stigma. In particular, we compared the effectiveness of three VR conditions based on intergroup contact theory in reducing stigma in the form of implicit and explicit attitudes and behavioral intentions. Through an immersive virtual consultation in a clinical setting, participants (N = 60) experienced one of three conditions: the Doctor's perspective (embodiment in a majority group member during contact), the Patient's perspective (embodiment in a minority group member), and a Third-person perspective (vicarious contact). Results demonstrated an increase of stigma on certain explicit measures (perceived recovery and social restriction) but also an increase of empathy (perspective-taking, empathic concern) across all conditions regardless of perspective. More importantly, participants' viewpoint influenced the desire for social distance differently depending on the perspective: Third-person observation significantly increased the desire for social distance, Doctor embodiment marginally decreased it, while Patient embodiment showed no significant change. No change was found in the Implicit Association Test. These findings suggest that VR intergroup contact can effectively reduce certain dimensions of stigma toward schizophrenia, but the type of perspective experienced significantly impacts outcomes.
Citations: 0
Impact of Avatar-Locomotion Congruence on User Experience and Identification in Virtual Reality.
Omar Khan, Hyeongil Nam, Kangsoo Kim
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616836. IF 6.5.
Abstract: As virtual reality (VR) continues to expand, particularly in social VR platforms and immersive gaming environments, understanding the factors that shape user experience is becoming increasingly important. Avatars and locomotion methods play central roles in influencing how users identify with their virtual representations and navigate virtual spaces. Despite extensive research on these elements individually, their relationship remains underexplored. In particular, little is known about how congruence between avatar appearance and locomotion method affects user perceptions. This study investigates the impact of avatar-locomotion congruence on user experience and avatar identification in VR. We conducted a within-subjects experiment with 30 participants, employing two visually distinct avatar types (human and gorilla) and two locomotion methods (human-like arm-swinging and gorilla-like arm-rolling), to assess their individual and combined effects. Our results indicate that congruence between avatar appearance and locomotion method enhances both avatar identification and user experience. These findings contribute to the understanding of the relationship between avatars and locomotion in VR, with potential applications in enhancing user experience in immersive gaming, social VR, and gamified remote physical therapy.
Citations: 0
HFM-GS: Half-Face Mapping 3DGS Avatar Based Real-Time HMD Removal.
Kangyu Wang, Jian Wu, Runze Fan, Hongwen Zhang, Sio Kei Im, Lili Wang
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616801. IF 6.5.
Abstract: In extended reality (XR) applications, enhancing user perception often necessitates head-mounted display (HMD) removal. However, existing methods suffer from low time performance and suboptimal reconstruction quality. In this paper, we propose a half-face-mapping 3D Gaussian splatting avatar based HMD removal method (HFM-GS), which performs real-time, high-fidelity online restoration of the complete face in HMD-occluded videos for XR applications after a short registration of the unoccluded face. We establish a mapping field between the upper- and lower-face Gaussians to enhance adaptability to deformation. Then, we introduce correlation-weight-based sampling to improve time performance and handle variations in the number of Gaussians. Finally, we ensure model robustness through a Gaussian Segregation Strategy. Compared to two state-of-the-art methods, our method achieves better quality and time performance. The results of the user study show that fidelity is significantly improved with our method.
Citations: 0
The Effect of Realism on Hand Redirection in Immersive Environments.
Shuqi Liao, Yuqi Zhou, Voicu Popescu
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616743. IF 6.5.
Abstract: Redirection in virtual reality (VR) enhances haptic feedback versatility by relaxing the need for precise alignment between virtual and physical objects. In mixed reality (MR), where users see the real world and their own hands, haptic redirection enables physical interaction with virtual objects but poses greater challenges due to altering real-world perception. This paper investigates the effect of the realism of the user's surroundings and of the user's hand on haptic redirection. The user's familiarity with their actual physical surroundings and their actual hand could make the redirection manipulations easier, or harder, to detect. In a user study (N = 30), participants saw either a virtual environment or their actual physical surroundings, and saw their hand rendered either with a generic 3D model or with a live 2D video sprite of their actual hand. The study used a two-alternative forced choice (2AFC) design, asking participants to detect hand redirections that bridged physical-to-virtual offsets of varying magnitudes. The results show that participants were not more sensitive to 2D video sprite hand redirection than to VR hand redirection, which supports the use of haptic redirection in MR.
Citations: 0
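The 2AFC design above yields, for each offset magnitude, a proportion of trials in which the redirection was detected; the detection threshold is conventionally read off where that proportion crosses a criterion (often 75% in 2AFC). A minimal sketch of that readout by linear interpolation follows; the function name and the criterion default are illustrative assumptions, not taken from the paper.

```python
def detection_threshold(offsets, p_detect, criterion=0.75):
    """Interpolate the offset at which the detection rate crosses the
    criterion (e.g. 75% correct in a 2AFC task).

    offsets:  offset magnitudes, in increasing order
    p_detect: proportion of trials detected at each offset
    Returns the interpolated threshold, or None if never crossed.
    """
    pairs = list(zip(offsets, p_detect))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1 and p1 > p0:
            t = (criterion - p0) / (p1 - p0)  # fraction of the way up this segment
            return x0 + t * (x1 - x0)
    return None
```

In practice a psychometric function (e.g. a cumulative Gaussian) would be fitted to the responses instead; the piecewise-linear readout is just the simplest consistent estimator.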
Viewpoint-Tolerant Depth Perception for Shared Extended Space Experience on Wall-Sized Display.
Dooyoung Kim, Jinseok Hong, Heejeong Ko, Woontack Woo
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616758. IF 6.5.
Abstract: We propose viewpoint-tolerant shared depth perception without individual tracking, leveraging human cognitive compensation for universally 3D-rendered images on a wall-sized display. While traditional 3D-perception-enabled display systems have primarily focused on single-user scenarios, adapting rendering based on head and eye tracking, the use of wall-sized displays to extend spatial experiences and support perceptually coherent multi-user interactions remains underexplored. We investigated the effects of virtual depth (dv) and absolute viewing distance (da) on human cognitive compensation factors (perceived distance difference, viewing angle threshold, and perceived presence) to construct a wall-display-based eXtended Reality (XR) space. Results show that participants experienced compelling depth perception even from off-center angles of 23°-37°, and that greatly increasing virtual depth worsens depth perception and presence factors, highlighting the importance of balancing the extended depth of the virtual space against viewing distance from the wall-sized display. Drawing on these findings, wall-sized displays in venues such as museums, galleries, and classrooms can evolve beyond 2D information sharing to offer immersive, spatially extended group experiences without individualized tracking or wearables.
Citations: 0
ZonAware: Identifying Zoning Out and Increasing Engagement in Upper Limb Virtual Reality Rehabilitation.
Kai-Lun Liao, Mengjie Huang, Jiajia Shi, Min Chen, Rui Yang
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-02. DOI: 10.1109/TVCG.2025.3616818. IF 6.5.
Abstract: Zoning out, a form of cognitive disengagement, seriously challenges the effectiveness of virtual reality (VR) based upper limb rehabilitation. As therapy often involves repetitive tasks requiring sustained attention, undetected lapses in focus can reduce motor learning, engagement, and overall recovery outcomes. This research addresses this gap by proposing ZonAware, a novel strategy integrating real-time zoning-out detection with adaptive intervention to enhance user engagement during VR rehabilitation. ZonAware identifies zoning out using five eye-tracking metrics: blink frequency, blink duration, pupil size, eye openness, and gaze duration. These signals are analysed through lightweight statistical models (Z-Score, Boxplot, and Modified Z-Score), with a hard voting mechanism producing binary classifications in real time. Upon detection, a pattern-changing intervention subtly modulates task difficulty, temporarily increasing and then decreasing it, to regain user focus without breaking immersion. Three user studies involving 70 healthy participants and 22 patients demonstrated the strategy's effectiveness. ZonAware achieved 98.24% detection accuracy with low latency (82-150 ms), reducing zoning-out frequency by 53.57% and shortening disengagement duration from 18.1 to 4.8 seconds. The approach also improved user engagement, performance, and emotional motivation. ZonAware delivers one of the first real-time zoning-out solutions for VR rehabilitation, offering an interpretable, theory-driven approach that enhances attention, engagement, and adaptability in human-computer interaction.
Citations: 0
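The detection pipeline named in the abstract combines three classical outlier rules (Z-Score, Boxplot/Tukey fences, Modified Z-Score on the median absolute deviation) under a hard majority vote. A minimal sketch of that combination on a single eye-tracking metric follows; the function names, window handling, and thresholds (2.5 / 1.5·IQR / 3.5) are textbook defaults assumed for illustration, not the paper's tuned parameters.

```python
import statistics

def zscore_flag(x, mean, stdev, thresh=2.5):
    """Flag a sample whose z-score exceeds the threshold."""
    return abs(x - mean) / stdev > thresh if stdev else False

def boxplot_flag(x, q1, q3, k=1.5):
    """Flag a sample outside the Tukey fences (boxplot rule)."""
    iqr = q3 - q1
    return x < q1 - k * iqr or x > q3 + k * iqr

def modified_zscore_flag(x, median, mad, thresh=3.5):
    """Flag a sample whose MAD-based modified z-score exceeds the threshold."""
    return abs(0.6745 * (x - median) / mad) > thresh if mad else False

def hard_vote(history, x):
    """Majority vote of the three detectors on the latest sample,
    using a sliding window of recent samples as the baseline."""
    mean, stdev = statistics.mean(history), statistics.pstdev(history)
    srt = sorted(history)
    q1, q3 = srt[len(srt) // 4], srt[(3 * len(srt)) // 4]
    median = statistics.median(history)
    mad = statistics.median(abs(v - median) for v in history)
    votes = [
        zscore_flag(x, mean, stdev),
        boxplot_flag(x, q1, q3),
        modified_zscore_flag(x, median, mad),
    ]
    return sum(votes) >= 2  # at least 2 of 3 detectors agree
```

In the full system one such vote would run per metric (blink frequency, pupil size, etc.) over a sliding window, with the per-metric decisions fused into the final zoning-out classification.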
RP-SLAM: Real-Time Photorealistic SLAM with Efficient 3D Gaussian Splatting.
Lizhi Bai, Chunqi Tian, Jun Yang, Siyu Zhang, Masanori Suganuma, Takayuki Okatani
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-10-01. DOI: 10.1109/TVCG.2025.3616173. IF 6.5.
Abstract: 3D Gaussian Splatting (3DGS) has emerged as a promising technique for high-quality 3D rendering, leading to increasing interest in integrating 3DGS into photorealistic SLAM systems. However, existing methods face challenges such as redundancy of Gaussian primitives, the forgetting problem during continuous optimization, and difficulty in initializing primitives in the monocular case due to the lack of depth information. To achieve efficient and photorealistic mapping, we propose RP-SLAM, a 3D Gaussian splatting-based visual SLAM method for monocular and RGB-D cameras. RP-SLAM decouples camera pose estimation from Gaussian primitive optimization and consists of three key components. Firstly, we propose an efficient incremental mapping approach that achieves a compact and accurate representation of the scene through adaptive sampling and Gaussian primitive filtering. Secondly, a dynamic window optimization method is proposed to mitigate the forgetting problem and improve map consistency. Finally, for the monocular case, a keyframe initialization method based on sparse point clouds is proposed to improve the initialization accuracy of Gaussian primitives, providing a geometric basis for subsequent optimization. Extensive experiments demonstrate that RP-SLAM achieves state-of-the-art map rendering accuracy while ensuring real-time performance and model compactness.
Citations: 0
SeG-Gaussian: Segmentation-Guided 3D Gaussian Optimization for Novel View Synthesis.
Ling-Xiao Zhang, Chenbo Jiang, Yu-Kun Lai, Lin Gao
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-09-29. DOI: 10.1109/TVCG.2025.3615421. IF 6.5.
Abstract: Radiance field based methods have recently revolutionized novel view synthesis of scenes captured with multi-view photos. A significant recent advance is 3D Gaussian Splatting (3DGS), which utilizes a set of 3D Gaussians to represent a radiance field, yielding high-fidelity results in real-time rendering. However, we have observed that 3DGS struggles to capture the necessary details in sparsely observed regions, where there is not enough gradient for effective split and clone operations. In this paper, we present a novel solution to address this limitation. Our key idea is to leverage segmentation information to identify poorly optimized regions within the 3D Gaussian representation. By applying split or clone operations on the corresponding 3D Gaussians in these regions, we refine the spatial distribution of Gaussians and enhance the overall quality of high-fidelity 3D scene reconstruction. To further optimize the reconstruction process, we introduce two spatial regularization terms: a repulsion loss and a smoothness loss. These terms effectively minimize overlap and redundancy among Gaussians, reducing outliers in the synthesized geometry. By incorporating these regularization techniques, our approach achieves state-of-the-art performance in real-time novel view synthesis and significantly improves visibility in less observed regions, leading to a more compact and accurate 3D scene representation.
Citations: 0
A Semantic Talking Style Space for Speech-Driven Facial Animation.
Yujin Chai, Yanlin Weng, Tianjia Shao, Kun Zhou
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-09-29. DOI: 10.1109/TVCG.2025.3615390. IF 6.5.
Abstract: We present a latent talking style space with semantic meanings for speech-driven 3D facial animation. The style space is learned from 3D speech facial animations via a self-supervision paradigm without any style labeling, leading to an automatic separation of high-level attributes: different channels of the latent style code possess different semantic meanings, such as a wide/slightly open mouth, a grinning/round mouth, and frowning/raising eyebrows. The style space enables intuitive and flexible control of talking styles in speech-driven facial animation by manipulating the channels of the style code. To effectively learn such a style space, we propose a two-stage approach, involving two deep neural networks, to disentangle the person identity, speech content, and talking style contained in 3D speech facial animations. The training is performed on a novel dataset of 3D talking faces of various styles, constructed from over ten hours of videos of 200 subjects collected from the Internet.
Citations: 0
Distortion-Aware Brushing for Reliable Cluster Analysis in Multidimensional Projections.
Hyeon Jeon, Michael Aupetit, Soohyun Lee, Kwon Ko, Youngtaek Kim, Ghulam Jilani Quadri, Jinwook Seo
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-09-29. DOI: 10.1109/TVCG.2025.3615314. IF 6.5.
Abstract: Brushing is a common interaction technique in 2D scatterplots, allowing users to select clustered points within a continuous, enclosed region for further analysis or filtering. However, applying conventional brushing to 2D representations of multidimensional (MD) data, i.e., Multidimensional Projections (MDPs), can lead to unreliable cluster analysis due to MDP-induced distortions that inaccurately represent the cluster structure of the original MD data. To alleviate this problem, we introduce a novel brushing technique for MDPs called Distortion-aware brushing. As users brush, Distortion-aware brushing corrects distortions around the currently brushed points by dynamically relocating points in the projection, pulling data points close to the brushed points in MD space while pushing distant ones apart. This dynamic adjustment helps users brush MD clusters more accurately, leading to more reliable cluster analysis. Our user studies with 24 participants show that Distortion-aware brushing significantly outperforms previous brushing techniques for MDPs in accurately separating clusters in the MD space and remains robust against distortions. We further demonstrate the effectiveness of our technique through two use cases: (1) conducting cluster analysis of geospatial data and (2) interactively labeling MD clusters.
Citations: 0
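The core interaction described in the abstract, relocating projected points based on their multidimensional distance to the brushed set, can be sketched as a single relaxation step: points near the brushed set in MD space are pulled toward the brush center in 2D, distant ones are pushed outward. The function below is a simplified illustration under assumed names and a fixed step size; the paper's actual relocation scheme is more elaborate.

```python
import numpy as np

def relax_projection(proj, md_dist_to_brush, center, near_thresh, strength=0.1):
    """One relaxation step over a 2D projection.

    proj:             (n, 2) projected coordinates
    md_dist_to_brush: (n,) distance of each point to the brushed set in MD space
    center:           (2,) brush center in projection space
    near_thresh:      MD distance below which a point counts as belonging
    """
    proj = proj.copy()
    dirs = proj - center                            # 2D offset from brush center
    norms = np.linalg.norm(dirs, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                         # avoid division by zero at the center
    unit = dirs / norms
    near = md_dist_to_brush < near_thresh
    proj[near] -= strength * dirs[near]             # pull MD-near points inward
    proj[~near] += strength * unit[~near]           # push MD-far points outward a fixed step
    return proj
```

Iterating such steps while the user holds the brush would produce the dynamic "distortion correction" behavior: MD-coherent clusters contract around the brush and spurious 2D neighbors drift away.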