IEEE Transactions on Visualization and Computer Graphics: Latest Articles

A Critical Analysis of the Usage of Dimensionality Reduction in Four Domains.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-07. DOI: 10.1109/TVCG.2025.3567989
Dylan Cashman, Mark Keller, Hyeon Jeon, Bum Chul Kwon, Qianwen Wang
Abstract: Dimensionality reduction is used as an important tool for unraveling the complexities of high-dimensional datasets in many fields of science, such as cell biology, chemical informatics, and physics. Visualizations of the dimensionally reduced data enable scientists to delve into the intrinsic structures of their datasets and align them with established hypotheses. Visualization researchers have thus proposed many dimensionality reduction methods and interactive systems designed to uncover latent structures. At the same time, different scientific domains have formulated guidelines or common workflows for using dimensionality reduction techniques and visualizations in their respective fields. In this work, we present a critical analysis of the usage of dimensionality reduction in scientific domains outside of computer science. First, we conduct a bibliometric analysis of 21,249 academic publications that use dimensionality reduction to observe differences in the frequency of techniques across fields. Next, we conduct a survey of a 71-paper sample from four fields: biology, chemistry, physics, and business. Through this survey, we uncover common workflows, processes, and usage patterns, including the mixed use of confirmatory data analysis to validate a dataset and projection method, and exploratory data analysis to then generate more hypotheses. We also find that misinterpretation and inappropriate usage are common, particularly in the visual interpretation of the resulting dimensionally reduced view. Lastly, we compare our observations with recent works in the visualization community in order to match work within our community to potential areas of impact outside our community. By comparing the usage found within scientific fields to the recent research output of the visualization community, we offer both validation of the progress of visualization research into dimensionality reduction and a call to action to produce techniques that meet the needs of scientific users.
Citations: 0
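The confirmatory-then-exploratory workflow the survey describes can be sketched with an off-the-shelf projection. Everything below (the synthetic data, the choice of PCA as the projection method) is illustrative and not taken from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a high-dimensional scientific dataset
# (e.g. single-cell measurements); purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

# Confirmatory step: check how much variance the 2D view retains
# before reading structure off the scatter plot; a low ratio is a
# warning that visual clusters may be projection artifacts.
pca = PCA(n_components=2)
Y = pca.fit_transform(X)
print(Y.shape)  # (200, 2)
print(pca.explained_variance_ratio_.sum())
```

Only after such a sanity check would the exploratory step (reading clusters or gradients off the 2D view) be justified, which is the ordering the surveyed workflows mix.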
TexHOI: Reconstructing Textures of 3D Unknown Objects in Monocular Hand-Object Interaction Scenes.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-06. DOI: 10.1109/TVCG.2025.3567276
Alakh Aggarwal, Ningna Wang, Xiaohu Guo
Abstract: Reconstructing 3D models of dynamic, real-world objects with high-fidelity textures from monocular frame sequences has been a challenging problem in recent years. This difficulty stems from factors such as shadows, indirect illumination, and inaccurate object-pose estimations due to occluding hand-object interactions. To address these challenges, we propose a novel approach that predicts the hand's impact on environmental visibility and indirect illumination on the object's surface albedo. Our method first learns the geometry and low-fidelity texture of the object, hand, and background through composite rendering of radiance fields. Simultaneously, we optimize the hand and object poses to achieve accurate object-pose estimations. We then refine physics-based rendering parameters, including roughness, specularity, albedo, hand visibility, skin color reflections, and environmental illumination, to produce precise albedo and accurate hand illumination and shadow regions. Our approach surpasses state-of-the-art methods in texture reconstruction and, to the best of our knowledge, is the first to account for hand-object interactions in object texture reconstruction. Please check our work at: https://alakhag.github.io/TexHOI-website/.
Citations: 0
ActiveAR: Augmented Reality Task Support System with Proactive Context and Virtual Content Management.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-06. DOI: 10.1109/TVCG.2025.3567346
Renjie Zhang, Jia Liu, Taishi Sawabe, Yuichiro Fujimoto, Masayuki Kanbara, Hirokazu Kato
Abstract: Augmented Reality (AR) has long been expected to help users improve their working efficiency. However, due to the absence of intelligent systems, existing AR applications are greatly affected by interference between virtual content and real-world activities. Unlike existing work, which focuses more on hiding virtual content to reduce interference, in this work we propose an innovative AR task support system in which virtual content actively guides users through task completion. During task execution, our system proactively searches for and tracks key objects in the scene, and uses this context information to automatically select appropriate virtual content and display positions. By introducing open-world prompt-based visual models, our system can effectively retrieve few-shot or even zero-shot objects that are uncommon in the dataset. This approach extends the use of AR task support systems beyond controlled industrial settings to more uncontrolled daily scenarios, overcoming the limitations of existing systems. It also significantly reduces development costs for developers. We demonstrate the advantages of our system over traditional virtual content management systems through a series of experiments that are closer to users' real usage situations.
Citations: 0
Shape Cloud Collage on Irregular Canvas.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-05. DOI: 10.1109/TVCG.2025.3566942
Sheng-Yi Yao, Dong-Yi Wu, Thi-Ngoc-Hanh Le, Tong-Yee Lee
Abstract: This paper addresses a challenging and novel problem in 2D shape cloud visualization: arranging irregular 2D shapes on an irregular canvas to minimize gaps and overlaps while emphasizing critical shapes by displaying them at larger sizes. The concept of a shape cloud is inspired by word clouds, which are widely used in visualization research to aesthetically summarize textual datasets by highlighting significant words with larger font sizes. We extend this concept to images, introducing shape clouds as a powerful and expressive visualization tool, guided by the principle that "a picture is worth a thousand words." Despite the potential of this approach, solutions in this domain remain largely unexplored. To bridge this gap, we develop a 2D shape cloud collage framework that compactly arranges 2D shapes, emphasizing important objects with larger sizes, analogous to the principles of word clouds. This task presents unique challenges, as existing 2D shape layout methods are not designed for scalable irregular packing. Applying these methods often results in suboptimal layouts, such as excessive empty spaces or inaccurate representations of the underlying data. To overcome these limitations, we propose a novel layout framework that leverages recent advances in differentiable optimization. Specifically, we formulate the irregular packing problem as an optimization task, modeling the object arrangement process as a differentiable pipeline. This approach enables fast and accurate end-to-end optimization, producing high-quality layouts. Experimental results show that our system efficiently creates visually appealing and high-quality shape clouds on arbitrary canvas shapes, outperforming existing methods.
Citations: 0
Revisiting Performance Models of Distal Pointing Tasks in Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-05. DOI: 10.1109/TVCG.2025.3567078
Logan Lane, Feiyu Lu, Shakiba Davari, Robert J Teather, Doug A Bowman
Abstract: Performance models of interaction, such as Fitts' law, are important tools for predicting and explaining human motor performance and for designing high-performance user interfaces. Extensive prior work has proposed such models for the 3D interaction task of distal pointing, in which the user points their hand or a device at a distant target in order to select it. However, there is no consensus on how to compute the index of difficulty for distal pointing tasks. We present a preliminary study suggesting that existing models may not be sufficient to model distal pointing performance with current virtual reality technologies. Based on these results, we hypothesized that both the form of the model and the standard method for collecting empirical data for pointing tasks might need to change in order to achieve a more accurate and valid distal pointing model. In our main study, we used a new methodology to collect distal pointing data and evaluated traditional models, purely ballistic models, and two-part models. Ultimately, we found that the best model used a simple Fitts'-law-style index of difficulty with angular measures of amplitude and width.
Citations: 0
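A Fitts'-law-style index of difficulty with angular amplitude and width, as the abstract describes, can be sketched as follows. The exact Shannon formulation (log2(A/W + 1)) is an assumption here; the abstract confirms only that the best model was "a simple Fitts'-law-style index of difficulty with angular measures":

```python
import math

def angular_index_of_difficulty(amplitude_deg: float, width_deg: float) -> float:
    """Shannon-formulation Fitts index of difficulty (in bits), using
    angular amplitude and angular target width in degrees.

    Assumption: the common Shannon variant log2(A/W + 1); the paper's
    final model may differ in detail.
    """
    return math.log2(amplitude_deg / width_deg + 1.0)

# A 30-degree sweep to a 2-degree-wide target: log2(16) = 4 bits.
print(angular_index_of_difficulty(30.0, 2.0))  # 4.0
```

Movement time would then be modeled as MT = a + b * ID, with a and b fit empirically per input technique.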
Prismatic: Interactive Multi-View Cluster Analysis of Concept Stocks.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-05. DOI: 10.1109/TVCG.2025.3567084
Wong Kam-Kwai, Yan Luo, Xuanwu Yue, Wei Chen, Huamin Qu
Abstract: Financial cluster analysis allows investors to discover investment alternatives and avoid undertaking excessive risks. However, this analytical task faces substantial challenges arising from the many pairwise comparisons, the dynamic correlations across time spans, and the ambiguity in deriving implications from business relational knowledge. We propose Prismatic, a visual analytics system that integrates quantitative analysis of historical performance and qualitative analysis of business relational knowledge to cluster correlated businesses interactively. Prismatic features three clustering processes: dynamic cluster generation, knowledge-based cluster exploration, and correlation-based cluster validation. Utilizing a multi-view clustering approach, it enriches data-driven clusters with knowledge-driven similarity, providing a nuanced understanding of business correlations. Through well-coordinated visual views, Prismatic facilitates a comprehensive interpretation of intertwined quantitative and qualitative features. Its usefulness and effectiveness are demonstrated via case studies on formulating concept stocks and extensive interviews with domain experts.
Citations: 0
Developable Approximation via Isomap on Gauss Image.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-05. DOI: 10.1109/TVCG.2025.3566887
Yuan-Yuan Cheng, Qing Fang, Ligang Liu, Xiao-Ming Fu
Abstract: We propose a novel method to generate developable approximations of triangular meshes. Instead of fitting the Gauss image with a geodesic circle in the local neighborhood, we apply a nonlinear dimensionality reduction method, Isomap, to fit a general curve on the sphere. This provides a larger space in which to represent the Gauss image of the local neighborhood as a 1D structure. Specifically, each triangle is assigned a target normal after local fitting; then, we deform the mesh to approach the target normals globally. By iteratively performing fitting and deformation, we obtain the developable approximation. We demonstrate the feasibility and effectiveness of our method on various examples. Compared to state-of-the-art methods, our results exhibit higher fidelity to the input mesh while possessing more prominent and visually distinct undevelopable seam curves.
Citations: 0
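The core idea, embedding a near-1D Gauss image into one dimension with Isomap, can be illustrated with synthetic data. The normals below are a hypothetical stand-in for a local neighborhood's Gauss image (for a developable patch, unit normals lie near a curve on the sphere); the paper's actual fitting and deformation steps are not reproduced:

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthesize unit normals near an arc on the sphere, plus small noise,
# mimicking the Gauss image of a nearly developable patch.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 80)
normals = np.stack([np.cos(t), np.sin(t), 0.05 * rng.normal(size=80)], axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Isomap to 1D: recovers a parameterization along the general curve,
# the "1D structure" fitted instead of a geodesic circle.
embedding = Isomap(n_neighbors=8, n_components=1).fit_transform(normals)
print(embedding.shape)  # (80, 1)
```

Each triangle's target normal would then be read off the fitted curve at its recovered 1D parameter, before the global deformation step.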
GSmoothFace: Generalized Smooth Talking Face Generation via Fine-Grained 3D Face Guidance.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3566382
Haiming Zhang, Zhihao Yuan, Chaoda Zheng, Xu Yan, Baoyuan Wang, Guanbin Li, Song Wu, Shuguang Cui, Zhen Li
Abstract: Although existing speech-driven talking face generation methods achieve significant progress, they are far from real-world application due to the avatar-specific training demand and unstable lip movements. To address the above issues, we propose GSmoothFace, a novel two-stage generalized talking face generation model guided by a fine-grained 3D face model, which can synthesize smooth lip dynamics while preserving the speaker's identity. Our proposed GSmoothFace model mainly consists of the Audio to Expression Prediction (A2EP) module and the Target Adaptive Face Translation (TAFT) module. Specifically, we first develop the A2EP module to predict expression parameters synchronized with the driving speech. It uses a transformer to capture the long-term audio context and learns the parameters from the fine-grained 3D facial vertices, resulting in accurate and smooth lip-synchronization performance. Afterward, the well-designed TAFT module, empowered by Morphology Augmented Face Blending (MAFB), takes the predicted expression parameters and target video as inputs to modify the facial region of the target video without distorting the background content. TAFT effectively exploits the identity appearance and background context in the target video, which makes it possible to generalize to different speakers without retraining. Both quantitative and qualitative experiments confirm the superiority of our method in terms of realism, lip-synchronization, and visual quality. See the project page for code, data, and pre-trained model requests: https://zhanghm1995.github.io/GSmoothFace.
Citations: 0
A Perceptually Optimized and Self-Calibrated Tone Mapping Operator.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3566377
Peibei Cao, Chenyang Le, Yuming Fang, Kede Ma
Abstract: With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression are in practical demand. In this paper, we develop a two-stage neural network-based TMO that is self-calibrated and perceptually optimized. In the first stage, motivated by the physiology of the early stages of the human visual system, we decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks, taking the normalized representation as input and estimating the Laplacian pyramid of the corresponding LDR image. We optimize the tone mapping network by minimizing the normalized Laplacian pyramid distance, a perceptual metric aligned with human judgments of tone-mapped image quality. In the second stage, we input the same HDR image, self-calibrated to different maximum luminance levels, into the learned tone mapping network and generate a pseudo-multi-exposure image stack with varying detail visibility and color saturation. We then train another fusion network to merge the LDR image stack into the desired LDR image by maximizing a variant of the structural similarity index for multi-exposure image fusion, proven perceptually relevant to fused image quality. Extensive experiments show that our method produces images with consistently better visual quality while ranking among the fastest local TMOs.
Citations: 0
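The Laplacian pyramid decomposition at the heart of the first stage can be sketched in a toy form. This is a minimal plain pyramid with 2x box down/upsampling; the paper's *normalized* pyramid adds perceptually motivated normalization that this sketch omits:

```python
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int):
    """Build a simple Laplacian pyramid: band-pass detail layers plus a
    low-pass residual. Uses 2x box-filter downsampling and nearest-
    neighbor upsampling for brevity (not the paper's filters)."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels - 1):
        # 2x box downsample, then nearest-neighbor upsample back.
        down = current.reshape(current.shape[0] // 2, 2,
                               current.shape[1] // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        pyramid.append(current - up)  # band-pass detail layer
        current = down
    pyramid.append(current)           # low-pass residual
    return pyramid

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = laplacian_pyramid(img, 3)
print([p.shape for p in pyr])  # [(8, 8), (4, 4), (2, 2)]
```

The decomposition is exactly invertible (upsample the residual and add the detail layers back), which is what lets a network predict the LDR image's pyramid level by level and still reconstruct a full image.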
Sketch2Seq: Reconstruct CAD Models from Feature-Based Sketch Segmentation.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3566544
Yue Sun, Jituo Li, Ziqin Xu, Jialu Zhang, Xinqi Liu, Dongliang Zhang, Guodong Lu
Abstract: Sketch-based modeling studies the automatic reconstruction of models from sketches, allowing users to visualize design concepts rapidly. Generating CAD models from user sketches helps reduce the learning curve for novice users, which promotes the everyday use of CAD software and expands its reach to non-professional groups. While various algorithms study automatically generating models from a single sketch or line drawing, they often produce non-editable models or editable models limited to simple extrusion operations. To address this issue, we propose a novel sketch-based modeling system, Sketch2Seq, which generates complex, semantic, and editable CAD models. Our system eliminates the need for additional annotations from users and produces models that support subsequent use in commercial software. The core of our method lies in understanding users' design intent from CAD sketches. We design a novel sketch segmentation network for identifying diverse operation features in CAD sketches, which utilizes geometric features of strokes and different levels of topological connections. Additionally, to tackle the segmentation task, a dataset for CAD sketch segmentation is introduced. Comparative experiments and ablation evaluations prove the effectiveness of the proposed method. Based on the segmentation result, coarse CAD sequences are generated and progressively executed. Meanwhile, the orders and parameters of the CAD sequences are optimized with context models and the input sketches. All algorithms are integrated into a user interface. Experiments and evaluations validate the feasibility and superiority of our entire system, which is able to reconstruct more complex features and achieve better results for longer sequences.
Citations: 0