IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Robust and Efficient Preservation of High-order Continuous Geometric Validity
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-26 | DOI: 10.1109/TVCG.2025.3603025
Wei Du, Shibo Liu, Jia-Peng Guo, Ligang Liu, Xiao-Ming Fu
Abstract: We propose a novel method to robustly and efficiently compute the maximum allowable step sizes so that 3D high-order finite elements continuously preserve geometric validity when moving along given directions with positive step sizes smaller than the computed ones. We transform the problem of finding the maximum allowable step sizes into one of finding the roots of cubic polynomials. To use interval arithmetic to avoid numerical issues in cubic equation solving, we completely enumerate the roots of cubic polynomials and apply the interval version of the Newton-Raphson iteration. The effectiveness of our algorithm is demonstrated through extensive testing. Compared to the state-of-the-art method, our algorithm achieves higher efficiency.
Citations: 0
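The abstract above reduces the step-size computation to finding the first positive root of a cubic polynomial via an interval-flavored Newton-Raphson iteration. A minimal sketch of that idea follows; it is not the authors' implementation, and the scanning grid, tolerances, and clamping are assumptions made for illustration:

```python
def cubic(a, b, c, d):
    """Return p(t) = a*t^3 + b*t^2 + c*t + d, evaluated via Horner's rule."""
    return lambda t: ((a * t + b) * t + c) * t + d

def deriv_interval(a, b, c, lo, hi):
    """Enclosure of p'(t) = 3a t^2 + 2b t + c on [lo, hi]: a quadratic
    attains its extrema at the endpoints or at its vertex."""
    pts = [lo, hi]
    if a != 0:
        vertex = -b / (3 * a)  # vertex of 3a t^2 + 2b t + c
        if lo < vertex < hi:
            pts.append(vertex)
    vals = [3 * a * t * t + 2 * b * t + c for t in pts]
    return min(vals), max(vals)

def first_positive_root(a, b, c, d, t_max=1.0, tol=1e-12):
    """Smallest root of the cubic in (0, t_max], or None if none is found.
    Scan a coarse grid for a sign change, bisect until p' has constant sign
    on the bracket (so Newton converges), then run Newton iterations."""
    p = cubic(a, b, c, d)
    n = 256
    prev_t, prev_v = 0.0, p(0.0)
    for i in range(1, n + 1):
        t = t_max * i / n
        v = p(t)
        if prev_v * v <= 0.0:          # sign change: root bracketed
            lo, hi = prev_t, t
            while True:
                dlo, dhi = deriv_interval(a, b, c, lo, hi)
                if dlo > 0 or dhi < 0 or hi - lo < tol:
                    break
                mid = 0.5 * (lo + hi)
                if p(lo) * p(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            x = 0.5 * (lo + hi)
            for _ in range(60):        # Newton refinement
                dp = 3 * a * x * x + 2 * b * x + c
                if dp == 0:
                    break
                step = p(x) / dp
                x -= step
                if abs(step) < tol:
                    break
            return min(max(x, lo), hi)  # stay inside the verified bracket
        prev_t, prev_v = t, v
    return None
```

In the paper's setting, p(t) would be the validity polynomial of an element moving with step t, so this root bounds the maximum allowable step.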
UMATO: Bridging Local and Global Structures for Reliable Visual Analytics with Dimensionality Reduction
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-25 | DOI: 10.1109/TVCG.2025.3602735
Hyeon Jeon, Kwon Ko, Soohyun Lee, Jake Hyun, Taehyun Yang, Gyehun Go, Jaemin Jo, Jinwook Seo
Abstract: Due to the intrinsic complexity of high-dimensional (HD) data, dimensionality reduction (DR) techniques cannot preserve all the structural characteristics of the original data. DR techniques therefore focus on preserving either local neighborhood structures (local techniques) or global structures such as pairwise distances between points (global techniques). However, both approaches can mislead analysts into erroneous conclusions about the overall arrangement of manifolds in HD data. For example, local techniques may exaggerate the compactness of individual manifolds, while global techniques may fail to separate clusters that are well separated in the original space. In this research, we provide a deeper insight into Uniform Manifold Approximation with Two-phase Optimization (UMATO), a DR technique that addresses this problem by effectively capturing both local and global structures. UMATO achieves this by dividing the optimization process of UMAP into two phases: in the first, it constructs a skeletal layout using representative points; in the second, it projects the remaining points while preserving the regional characteristics. Quantitative experiments validate that UMATO outperforms widely used DR techniques, including UMAP, in global structure preservation, with only a slight loss in local structure. We also confirm that UMATO outperforms baseline techniques in scalability and in stability against initialization and subsampling, making it more effective for reliable HD data analysis. Finally, we present a case study and a qualitative demonstration that highlight UMATO's effectiveness in generating faithful projections, enhancing the overall reliability of visual analytics using DR.
Citations: 0
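UMATO's two-phase scheme (a global skeletal layout of representative points, then local placement of the rest) can be illustrated with a toy stand-in. The sketch below uses classical MDS for the global phase and inverse-distance interpolation over nearest hubs for the local phase; UMATO itself optimizes UMAP's cross-entropy objective, so everything here is a simplified analogy, not the actual algorithm:

```python
import numpy as np

def two_phase_embed(X, n_hubs=20, k=5, seed=0):
    """Phase 1: global 2D layout of randomly chosen representative points
    ("hubs") via classical MDS. Phase 2: each remaining point is placed by
    inverse-distance interpolation over its k nearest hubs, so local
    neighborhoods are positioned relative to the fixed global skeleton."""
    rng = np.random.default_rng(seed)
    n = len(X)
    hubs = rng.choice(n, size=min(n_hubs, n), replace=False)
    rest = np.setdiff1d(np.arange(n), hubs)

    # --- Phase 1: classical MDS on the hub-hub squared-distance matrix ---
    D2 = ((X[hubs, None, :] - X[None, hubs, :]) ** 2).sum(-1)
    J = np.eye(len(hubs)) - 1.0 / len(hubs)   # centering matrix
    B = -0.5 * J @ D2 @ J                     # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:2]             # top-2 eigenpairs -> 2D
    Y = np.zeros((n, 2))
    Y[hubs] = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

    # --- Phase 2: place each remaining point from its k nearest hubs ---
    for i in rest:
        d = np.linalg.norm(X[hubs] - X[i], axis=1)
        nn = np.argsort(d)[:k]
        wgt = 1.0 / (d[nn] + 1e-12)
        Y[i] = (wgt[:, None] * Y[hubs[nn]]).sum(0) / wgt.sum()
    return Y
```

The design point this mirrors is that the global phase fixes inter-cluster geometry first, so the local phase cannot distort it.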
ChronoDeck: A Visual Analytics Approach for Hierarchical Time Series Analysis
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-25 | DOI: 10.1109/TVCG.2025.3602273
Lingyu Meng, Shuhan Liu, Keyi Yang, Jiabin Xu, Zikun Deng, Di Weng, Yingcai Wu
Abstract: Hierarchical time series data comprises a collection of time series aggregated at multiple levels based on categorical, geographical, or physical constraints; its analysis helps analysts in domains such as retail, finance, and energy gain valuable insights and make informed decisions. However, existing interactive exploratory analysis approaches for hierarchical time series data fall short in analyzing time series across different aggregation levels and in supporting analytical tasks more complex than common ones such as summarization and comparison. These limitations motivate us to develop a new visual analytics approach. We first derive a taxonomy delineating the various tasks in hierarchical time series analysis, based on a literature survey and expert interviews. Building on this taxonomy, we develop ChronoDeck, an interactive system that incorporates a multi-column hierarchical time series visualization for carrying out these analytical tasks and distilling insights from the data. ChronoDeck visualizes each aggregation level of the hierarchical time series with a combination of coordinated dimensionality reduction and small-multiples visualizations, alongside interactions including highlight, align, filter, and select, assisting users in visualizing, comparing, and transforming hierarchical time series and in identifying entities of interest. The effectiveness of ChronoDeck is demonstrated through case studies on three real-world datasets and expert interviews.
Citations: 0
CausalChat: Interactive Causal Model Development and Refinement Using Large Language Models
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-25 | DOI: 10.1109/TVCG.2025.3602448
Yanming Zhang, Akshith Kota, Eric Papenhausen, Klaus Mueller
Abstract: Causal networks are widely used in many fields to model the complex relationships between variables. A recent approach has sought to construct causal networks by leveraging the wisdom of crowds through the collective participation of humans. While this can yield detailed causal networks that model the underlying phenomena quite well, it requires a large number of individuals with domain understanding. We adopt a different approach: leveraging the causal knowledge that large language models, such as OpenAI's GPT-4, have learned by ingesting massive amounts of literature. Within a dedicated visual analytics interface, called CausalChat, users explore single variables or variable pairs recursively to identify causal relations, latent variables, confounders, and mediators, constructing detailed causal networks through conversation. Each probing interaction is translated into a tailored GPT-4 prompt, and the response is conveyed through visual representations that are linked to the generated text for explanations. We demonstrate the functionality of CausalChat across diverse data contexts and conduct user studies involving both domain experts and laypersons.
Citations: 0
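The abstract describes translating each probing interaction into a tailored GPT-4 prompt. A hypothetical prompt builder illustrating that translation step is sketched below; the template wording, probe names, and output format are invented for illustration and are not CausalChat's actual templates:

```python
def causal_probe_prompt(variable, context, probe="confounders"):
    """Map a UI probing interaction (variable + probe type) to a prompt
    string. Hypothetical templates, not the system's real ones."""
    templates = {
        "causes":      "List plausible direct causes of '{v}' in the context of {c}.",
        "effects":     "List plausible direct effects of '{v}' in the context of {c}.",
        "confounders": "List variables that may confound the relation involving '{v}' in {c}.",
    }
    body = templates[probe].format(v=variable, c=context)
    # A structured answer format makes the response parseable into graph edges.
    return body + " Answer as a JSON list of {variable, rationale} objects."
```

The returned string would then be sent to the LLM, and the parsed response rendered as nodes and edges in the visual interface.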
GECO: Fast Generative Image-to-3D within one SECOnd
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-25 | DOI: 10.1109/TVCG.2025.3602405
Chen Wang, Jiatao Gu, Xiaoxiao Long, Yuan Liu, Lingjie Liu
Abstract: Recent advancements in single-image 3D generation have produced two main categories of methods: reconstruction-based and generative. Reconstruction-based methods are efficient but lack uncertainty handling, leading to blurry artifacts in unseen regions. Generative approaches based on score distillation [47], [71] are slow due to scene-specific optimization. Other methods, like InstantMesh [76], use a two-stage process (generating multi-view images with a diffusion model, then reconstructing 3D) that is inefficient due to the diffusion model's multiple denoising steps. To overcome these limitations, we introduce GECO, a feed-forward method for fast, high-quality single-image-to-3D generation within one second on a single GPU. Our approach resolves the uncertainty and inefficiency issues through a two-stage distillation process. In the first stage, we distill a multi-step diffusion model [56] into a one-step model using score distillation for single-image-to-multi-view synthesis. To mitigate the quality degradation caused by the one-step model, we introduce a second distillation stage that learns to predict high-quality 3D from imperfect multi-view generated images by performing distillation directly on 3D representations. Experiments demonstrate that GECO offers significant speed improvements and reconstruction quality comparable to prior two-stage methods. Code: https://cwchenwang.github.io/geco.
Citations: 0
TransGI: Real-Time Dynamic Global Illumination with Object-Centric Neural Transfer Model
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-22 | DOI: 10.1109/TVCG.2025.3596146
Yijie Deng, Lei Han, Lu Fang
Abstract: Neural rendering algorithms have revolutionized computer graphics, yet their impact on real-time rendering under arbitrary lighting conditions remains limited due to strict latency constraints in practical applications. The key challenge lies in formulating a compact yet expressive material representation. To address this, we propose TransGI, a novel neural rendering method for real-time, high-fidelity global illumination. It comprises an object-centric neural transfer model for material representation and a radiance-sharing lighting system for efficient illumination. Traditional BSDF representations and spatial neural material representations lack expressiveness, requiring thousands of ray evaluations to converge to noise-free colors; conversely, real-time methods trade quality for efficiency by supporting only diffuse materials. In contrast, our object-centric neural transfer model achieves compactness and expressiveness through an MLP-based decoder and vertex-attached latent features, supporting glossy effects with low memory overhead. For dynamic, varying lighting conditions, we introduce local light probes capturing scene radiance, coupled with an across-probe radiance-sharing strategy for efficient probe generation. We implemented our method in a real-time rendering engine, combining compute shaders and CUDA-based neural networks. Experimental results demonstrate that our method renders a frame in under 10 ms with significantly improved rendering quality compared to baseline methods.
Citations: 0
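The object-centric transfer model described above pairs vertex-attached latent features with an MLP decoder. A shape-level sketch of such a decode is given below; the dimensions, single hidden layer, ReLU activation, and input layout are assumptions for illustration, not TransGI's actual architecture (which runs as compute shaders and CUDA kernels):

```python
import numpy as np

def decode_radiance(latents, view_dir, W1, b1, W2, b2):
    """Per-vertex latent features plus a shared view direction are fed
    through a tiny MLP to produce per-vertex RGB radiance. Illustrative
    stand-in for an object-centric neural transfer decode."""
    # concatenate each vertex's latent code with the view direction
    x = np.concatenate(
        [latents, np.broadcast_to(view_dir, (len(latents), 3))], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return np.maximum(h @ W2 + b2, 0.0)  # clamp to non-negative radiance
```

The compactness argument is visible in the shapes: the per-object storage is one small latent vector per vertex plus one shared set of MLP weights.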
A Survey on Annotations in Information Visualization: Empirical Studies, Applications and Challenges
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-20 | DOI: 10.1109/TVCG.2025.3600957
Md Dilshadur Rahman, Bhavana Doppalapudi, Ghulam Jilani Quadri, Paul Rosen
Abstract: Annotations are widely used in information visualization to guide attention, clarify patterns, and support interpretation. We present a comprehensive survey of 191 research papers describing empirical studies, tools, techniques, and systems that incorporate annotations across various visualization contexts. Based on a structured analysis, we characterize annotations by their types, generation methods, and targets, and examine their use across four primary application domains: user engagement, storytelling, collaboration, and exploratory data analysis. We also discuss key trends, practical challenges, and open research directions. These findings offer a foundation for designing more effective annotation systems and advancing future research on annotation in visualization. An interactive web resource detailing the surveyed papers is available at https://shape-vis.github.io/annotation star/.
Citations: 0
StrucADT: Generating Structure-controlled 3D Point Clouds with Adjacency Diffusion Transformer
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-19 | DOI: 10.1109/TVCG.2025.3600392
Zhenyu Shu, Jiajun Shen, Zhongui Chen, Xiaoguang Han, Shiqing Xin
Abstract: In the field of 3D point cloud generation, numerous generative models have demonstrated the ability to produce diverse and realistic 3D shapes. However, the majority of these approaches struggle to generate controllable point cloud shapes that meet user-specific requirements, hindering large-scale application of 3D point cloud generation. To address this lack of control, we are the first to propose controlling the generation of point clouds by shape structures that comprise part existences and part adjacency relationships. We manually annotate the adjacency relationships between the segmented parts of point cloud shapes, thereby constructing a StructureGraph representation. Based on this representation, we introduce StrucADT, a novel structure-controllable point cloud generation model consisting of a StructureGraphNet module that extracts structure-aware latent features, a cCNF Prior module that learns the distribution of the latent features controlled by the part adjacency, and a Diffusion Transformer module conditioned on the latent features and part adjacency to generate structure-consistent point cloud shapes. Experimental results demonstrate that our structure-controllable method produces high-quality and diverse point cloud shapes, enables the generation of controllable point clouds from user-specified shape structures, and achieves state-of-the-art performance in controllable point cloud generation on the ShapeNet dataset.
Citations: 0
Pseudo Label Learning for Partial Point Cloud Registration
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-19 | DOI: 10.1109/TVCG.2025.3600395
Wenping Ma, Yifan Sun, Yue Wu, Yue Zhang, Hao Zhu, Biao Hou, Licheng Jiao
Abstract: Partial point cloud registration plays a crucial role in computer vision and has widespread applications in 3D map construction, pose estimation, and high-precision localization. However, collected point clouds often contain missing data due to hardware limitations and complex environments. Various partial registration algorithms have been proposed, most of which rely on estimating overlap regions, and a significant proportion of them rely heavily on ground-truth labels. Manual labeling is both time-consuming and labor-intensive, whereas automatic algorithmic labeling lacks sufficient accuracy. To tackle this issue, we present PSEudo Label learning for unsupervised partial point cloud registration (PSEL). This method uses complementary tasks to learn reliable pseudo labels for overlap regions and correspondences without depending on ground-truth labels. The key idea is to exploit the complementarity between overlap estimation and registration to generate two types of pseudo labels based on the nearest points in pairs of aligned point clouds. These pseudo labels are then employed to supervise the learning of overlap regions and correspondences, gradually enhancing their accuracy throughout the learning process and ultimately establishing an unsupervised learning framework. PSEL consists of an overlap estimation module and a correspondence filtering module; the pseudo labels generated after registration supervise both. Notably, the correspondence filtering module has two pipelines: the similarity and the difference of corresponding point features are used to eliminate false correspondences during the training and inference stages, respectively, with only the latter optimized with pseudo labels. To validate the effectiveness of our registration method, we conducted experiments on the synthetic dataset ModelNet40, the indoor dataset 3DMatch, and the outdoor dataset KITTI. The code is available at https://github.com/yifans923/PSEL.
Citations: 0
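The key idea in this abstract is deriving pseudo labels from nearest points in a pair of aligned point clouds. A toy version of that labeling step is sketched below; the distance threshold, brute-force nearest-neighbor search, and function signature are assumptions for illustration, not the PSEL code:

```python
import numpy as np

def pseudo_labels(src, tgt, R, t, dist_thresh=0.05):
    """Given a current rigid estimate (R, t), align src to tgt and derive
    pseudo labels: a per-point overlap flag and nearest-point
    correspondences within a distance threshold."""
    aligned = src @ R.T + t
    # brute-force nearest target point for every aligned source point
    d = np.linalg.norm(aligned[:, None, :] - tgt[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    nn_dist = d[np.arange(len(src)), nn]
    overlap = nn_dist < dist_thresh          # pseudo overlap labels
    corr = np.stack([np.arange(len(src))[overlap], nn[overlap]], axis=1)
    return overlap, corr
```

In an unsupervised loop like the one described, these labels would supervise the overlap-estimation and correspondence-filtering modules, which in turn improve the next alignment, progressively tightening the labels.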
VIVA: Virtual Healthcare Interactions Using Visual Analytics, With Controllability Through Configuration
IF 6.5
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2025-08-15 | DOI: 10.1109/TVCG.2025.3599458
Jurgen Bernard, Mara Solen, Helen Novak Lauscher, Kurtis Stewart, Kendall Ho, Tamara Munzner
Abstract: At the beginning of the COVID-19 pandemic, HealthLink BC (HLBC) rapidly integrated physicians into the triage process of its virtual healthcare service to improve patient outcomes and satisfaction and to preserve health care system capacity. We present the design and implementation of a visual analytics tool, VIVA (Virtual healthcare Interactions using Visual Analytics), to support HLBC in analyzing various forms of usage data from the service. We abstract HLBC's data and data analysis tasks, and use these abstractions to inform the design of VIVA. We also present the interactive workflow abstraction of Scan, Act, Adapt. We validate VIVA's design through three case studies with stakeholder domain experts. We further propose the Controllability Through Configuration model for conducting and analyzing design studies, and discuss the architectural evolution of VIVA through that lens. The model articulates configuration, whether specified by a developer or technical power user or constructed automatically from log data of previous interactive sessions, as a bridge between the rigidity of hardwired programming and the time-consuming implementation of full end-user interactivity. Availability: Supplemental materials at https://osf.io/wv38n.
Citations: 0