IEEE Transactions on Visualization and Computer Graphics: Latest Articles

XROps: a Visual Workflow Management System for Dynamic Immersive Analytics.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-27. DOI: 10.1109/TVCG.2025.3546467
Suemin Jeon, JunYoung Choi, Haejin Jeong, Won-Ki Jeong
Abstract: Immersive analytics is gaining attention across multiple domains due to its capability to facilitate intuitive data analysis in expansive environments through user interaction with data. However, creating immersive analytics systems for specific tasks is challenging due to the need for programming expertise and significant development effort. Despite the introduction of various immersive visualization authoring toolkits, domain experts still face hurdles in adopting immersive analytics into their workflows, particularly when faced with tasks and data that change dynamically in real time. To lower these technical barriers, we introduce XROps, a web-based authoring system that allows users to create immersive analytics applications through interactive visual programming, without the need for low-level scripting or coding. XROps enables dynamic immersive analytics authoring by allowing users to modify each step of the data visualization process with immediate feedback, so they can build visualizations on the fly and adapt to changing environments. It also supports the integration and visualization of real-time sensor data from XR devices, a key feature of immersive analytics, facilitating the creation of various analysis scenarios. We evaluated the usability of XROps through a user study and demonstrated its efficacy and usefulness in several example scenarios. We have released a web platform (https://vience.io/xrops) with various examples to supplement our findings.
Citations: 0
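Not part of the paper, and not the XROps implementation: the sketch below only illustrates the general dataflow idea behind visual-programming authoring of this kind, where nodes transform data, edges wire outputs to inputs, and the whole graph re-executes so the author gets immediate feedback. All names (Node, DataflowGraph, the example operations) are hypothetical.

```python
# Minimal dataflow-graph sketch: nodes apply operations, edges carry data,
# and run() executes the graph in topological order.
from collections import defaultdict, deque

class Node:
    def __init__(self, name, func, inputs=()):
        self.name = name            # display name in the visual editor
        self.func = func            # operation this node applies
        self.inputs = list(inputs)  # names of upstream nodes

class DataflowGraph:
    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.name] = node

    def run(self, sources):
        """Execute the graph; `sources` maps source-node names to raw data."""
        indeg = {n: len(self.nodes[n].inputs) for n in self.nodes}
        children = defaultdict(list)
        for n in self.nodes.values():
            for dep in n.inputs:
                children[dep].append(n.name)
        ready = deque(n for n, d in indeg.items() if d == 0)
        results = {}
        while ready:
            name = ready.popleft()
            node = self.nodes[name]
            if not node.inputs:
                results[name] = node.func(sources[name])
            else:
                results[name] = node.func(*[results[i] for i in node.inputs])
            for child in children[name]:
                indeg[child] -= 1
                if indeg[child] == 0:
                    ready.append(child)
        return results

# Example authoring session: sensor stream -> filter -> map to bar heights.
g = DataflowGraph()
g.add(Node("sensor", lambda xs: xs))                                  # e.g. live XR sensor samples
g.add(Node("filter", lambda xs: [x for x in xs if x > 0], ["sensor"]))
g.add(Node("bars", lambda xs: [x * 0.1 for x in xs], ["filter"]))
print(g.run({"sensor": [3, -1, 5, 2]})["bars"])                       # ~[0.3, 0.5, 0.2]
```

Re-running the graph after editing any single node is what gives the "modify each step with immediate feedback" behavior described in the abstract.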
MC-NeRF: Multi-Camera Neural Radiance Fields for Multi-Camera Image Acquisition Systems.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-26. DOI: 10.1109/TVCG.2025.3546290
Yu Gao, Lutong Su, Hao Liang, Yufeng Yue, Yi Yang, Mengyin Fu
Abstract: Neural Radiance Fields (NeRF) use multi-view images for 3D scene representation and have demonstrated remarkable performance. As one of the primary sources of multi-view images, multi-camera systems encounter challenges such as varying intrinsic parameters and frequent pose changes. Most previous NeRF-based methods assume a single camera and rarely consider multi-camera scenarios. In addition, some NeRF methods that can optimize intrinsic and extrinsic parameters remain susceptible to suboptimal solutions when these parameters are poorly initialized. In this paper, we propose MC-NeRF, a method for jointly optimizing both intrinsic and extrinsic parameters alongside NeRF, allowing individual camera parameters for each image. First, we analyze the coupling issue that arises from the joint optimization of intrinsics and extrinsics, and propose a decoupling constraint that utilizes auxiliary images. To further address degenerate cases in the decoupling process, we introduce an efficient auxiliary image acquisition scheme to mitigate these effects. Furthermore, recognizing that most existing datasets are designed for a single camera, we provide a new dataset that includes both simulated and real-world data. Experiments demonstrate the effectiveness of our method in scenarios where each image corresponds to different camera parameters. Specifically, our approach outperforms the baselines in intrinsics estimation, extrinsics estimation, scale estimation, and rendering quality. The code and supplementary materials are available at https://in2-viaun.github.io/MC-NeRF.
Citations: 0
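A hedged sketch of the general setup, not the MC-NeRF code: per-image learnable intrinsics (focal length, principal point) and extrinsics (axis-angle rotation, translation) expressed as PyTorch parameters, so they can in principle be optimized jointly with a radiance field via a reprojection or photometric loss. All names and default values are assumptions.

```python
import torch
import torch.nn as nn

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = r.norm() + 1e-8
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

class PerImageCameras(nn.Module):
    """One learnable camera (intrinsics + extrinsics) per training image."""
    def __init__(self, n_images, init_focal=500.0, width=640, height=480):
        super().__init__()
        self.focal = nn.Parameter(torch.full((n_images,), init_focal))
        self.center = nn.Parameter(torch.tensor([[width / 2, height / 2]] * n_images,
                                                dtype=torch.float32))
        self.rot = nn.Parameter(torch.zeros(n_images, 3))    # axis-angle per image
        self.trans = nn.Parameter(torch.zeros(n_images, 3))

    def project(self, i, pts_world):
        """Project world-space points (N, 3) into image i's pixel coordinates (N, 2)."""
        R = axis_angle_to_matrix(self.rot[i])
        pts_cam = pts_world @ R.T + self.trans[i]
        uv = pts_cam[:, :2] / pts_cam[:, 2:3].clamp(min=1e-6)
        return uv * self.focal[i] + self.center[i]

# In a NeRF-style loop one would add a loss on these projections together with the
# radiance-field loss and optimize both sets of parameters with the same optimizer.
cams = PerImageCameras(n_images=4)
pts = torch.randn(8, 3) + torch.tensor([0.0, 0.0, 3.0])
print(cams.project(0, pts).shape)   # torch.Size([8, 2])
```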
Interweaving Mathematics and Art: Drawing Graphs as Celtic Knots and Links with CelticGraph.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-25. DOI: 10.1109/TVCG.2025.3545481
Niklas Grone, Peter Eades, Karsten Klein, Patrick Eades, Leo Schreiber, Ulf Hailer, Hugo A D do Nascimento, Falk Schreiber
Abstract: Celtic knots, an ancient art form often linked to Celtic heritage, have historically been used to decorate monuments and manuscripts, often symbolizing notions of eternity and interconnectedness. This paper introduces CelticGraph, a framework for drawing graphs in the style of Celtic knots and links. The process of creating these drawings raises interesting combinatorial concepts in the theory of circuits in planar graphs. Further, CelticGraph uses a novel algorithm to represent edges as Bézier curves, aiming to show each link as a smooth curve with limited curvature. We also show that our production mechanisms can generate any 4-regular plane graph and thereby any Celtic knot or link. The CelticGraph framework for drawing graphs as Celtic knots and links is implemented as an add-on for Vanted, a network visualization and analysis tool.
Citations: 0
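A small illustrative sketch, not CelticGraph's algorithm: evaluating a cubic Bézier edge and estimating its curvature, the quantity the paper aims to keep small so each knot strand reads as a smooth curve. The function names and sample control points are assumptions.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Point(s) on a cubic Bézier curve at parameter t in [0, 1]."""
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def curvature(p0, p1, p2, p3, t):
    """Curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) of the 2D curve at scalar t."""
    d1 = (3 * (1 - t) ** 2 * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1)
          + 3 * t ** 2 * (p3 - p2))                       # first derivative
    d2 = 6 * (1 - t) * (p2 - 2 * p1 + p0) + 6 * t * (p3 - 2 * p2 + p1)  # second derivative
    num = abs(d1[0] * d2[1] - d1[1] * d2[0])
    return num / (np.hypot(d1[0], d1[1]) ** 3 + 1e-12)

p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]))
ts = np.linspace(0.0, 1.0, 5)
print(cubic_bezier(p0, p1, p2, p3, ts).round(2))
print(max(curvature(p0, p1, p2, p3, t) for t in ts))   # peak sampled curvature along the edge
```

Sampling the curvature along each edge like this is one simple way to check a "limited curvature" constraint when fitting the curves.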
PDPilot: Exploring Partial Dependence Plots Through Ranking, Filtering, and Clustering.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-25. DOI: 10.1109/TVCG.2025.3545025
Daniel Kerrigan, Brian Barr, Enrico Bertini
Abstract: Partial dependence plots (PDPs) and individual conditional expectation (ICE) plots are visualizations used to explain the behavior of machine learning (ML) models trained on tabular datasets. They show how the values of a feature or pair of features impact a model's predictions. However, in models with a large number of features, it is impractical for an ML practitioner to analyze all possible plots. To address this, we present new techniques for ranking and filtering PDP and ICE plots, and we build upon existing strategies for clustering the lines in ICE plots. Together, these techniques aim to help ML practitioners efficiently explore PDP and ICE plots and identify interesting model behavior. We integrate these techniques into PDPilot, a visual analytics tool that runs in Jupyter notebooks. We use PDPilot to study how seven ML practitioners utilize the ranking, filtering, and clustering techniques to analyze an ML model.
Citations: 0
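To make the underlying computation concrete, here is a hedged sketch (not PDPilot itself): a one-feature partial dependence curve, plus a simple "rank features by how much their PDP varies" heuristic of the kind such a tool could use to order plots. The ranking criterion and all names are illustrative assumptions.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Average prediction over the dataset with `feature` clamped to each grid value."""
    pdp = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        pdp.append(predict(X_mod).mean())
    return np.array(pdp)

def rank_by_pdp_range(predict, X, n_grid=20):
    """Order features by the range (max - min) of their partial dependence curves."""
    scores = []
    for j in range(X.shape[1]):
        grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
        curve = partial_dependence(predict, X, j, grid)
        scores.append(curve.max() - curve.min())
    return np.argsort(scores)[::-1], np.array(scores)

# Toy model: predictions depend strongly on feature 0, weakly on feature 1, not at all on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
model = lambda X: 3.0 * X[:, 0] + 0.2 * X[:, 1]
order, scores = rank_by_pdp_range(model, X)
print(order, scores.round(2))   # feature 0 should come first
```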
Every Angle Is Worth A Second Glance: Mining Kinematic Skeletal Structures from Multi-view Joint Cloud.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-24. DOI: 10.1109/TVCG.2025.3542442
Junkun Jiang, Jie Chen, Ho Yin Au, Mingyuan Chen, Wei Xue, Yike Guo
Abstract: Multi-person motion capture from sparse angular observations is a challenging problem under interference from both self- and mutual occlusions. Existing works produce accurate 2D joint detections; however, when these are triangulated and lifted into 3D, available solutions all struggle to select the most accurate candidates and associate them with the correct joint type and target identity. To fully utilize all accurate 2D joint location information, we propose to independently triangulate all same-typed 2D joints from all camera views regardless of their target ID, forming the Joint Cloud. The Joint Cloud consists of both valid joints lifted from the same joint type and target ID, as well as falsely constructed ones from different 2D sources. These redundant and inaccurate candidates are processed by the proposed Joint Cloud Selection and Aggregation Transformer (JCSAT), which involves three cascaded encoders that deeply explore the trajectile, skeletal-structural, and view-dependent correlations among all 3D point candidates in the cross-embedding space. An Optimal Token Attention Path (OTAP) module is proposed, which subsequently selects and aggregates informative features from these redundant observations for the final prediction of human motion. To demonstrate the effectiveness of JCSAT, we build and publish a new multi-person motion capture dataset, BUMocap-X, with complex interactions and severe occlusions. Comprehensive experiments on the newly presented as well as benchmark datasets validate the effectiveness of the proposed framework, which outperforms all existing state-of-the-art methods, especially under challenging occlusion scenarios.
Citations: 0
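A minimal sketch of the classical building block (an assumed setup, not the paper's pipeline): linear (DLT) triangulation of one joint from its 2D detections in calibrated views. Applied to every pair of same-typed detections across views, this is the operation that produces the redundant "Joint Cloud" candidates the paper then selects from. The toy camera parameters and names are assumptions.

```python
import numpy as np

def triangulate(projections, points_2d):
    """DLT triangulation: 3x4 projection matrices + 2D observations -> 3D point."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)          # least-squares solution = last right singular vector
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras observing the same joint.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
def camera(R, t):
    return K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])

P1 = camera(np.eye(3), [0, 0, 0])
P2 = camera(np.eye(3), [-1, 0, 0])          # second camera centre 1 m to the right
X_true = np.array([0.3, -0.2, 4.0, 1.0])    # homogeneous ground-truth joint position
uv = []
for P in (P1, P2):
    x = P @ X_true
    uv.append(x[:2] / x[2])
print(triangulate([P1, P2], uv).round(3))   # ~[0.3, -0.2, 4.0]
```

With noisy detections and unknown identities, many such triangulations are wrong, which is exactly the redundancy the transformer-based selection step is designed to resolve.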
Visualizing Causality in Mixed Reality for Manual Task Learning: A Study.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-24. DOI: 10.1109/TVCG.2025.3542949
Rahul Jain, Jingyu Shi, Andrew Benton, Moiz Rasheed, Hyungjun Doh, Subramanian Chidambaram, Karthik Ramani
Abstract: Mixed Reality (MR) is gaining prominence in manual task skill learning due to its in-situ, embodied, and immersive experience. To teach manual tasks, current methodologies break a task into hierarchies (tasks into subtasks) and visualize not only the current subtasks but also future ones that are causally related. We investigate the impact of visualizing causality within an MR framework on manual task skill learning. We conducted a user study with 48 participants, examining how presenting tasks at different hierarchical causality levels (no causality, event-level, interaction-level, and gesture-level causality) affects user comprehension and performance in a complex assembly task. The study finds that displaying all causality levels enhances user understanding and task execution, at the cost of longer learning time. Based on the results, we further provide design recommendations and in-depth discussions for future manual task learning systems.
Citations: 0
Comment Analyzer: A Tool for Analyzing Comment Sets and Thread Structures of News Articles.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-24. DOI: 10.1109/TVCG.2025.3544733
Dora Kiesel, Patrick Riehmann, Ines Engelmann, Hanna Ramezani, Bernd Froehlich
Abstract: The lack of visually guided data exploration tools limits the scope of research questions communication scientists are able to study. The Comment Analyzer steps in where traditional statistical tools fail when it comes to researching the commenting behavior of news article readers. The basis of such an analysis is comment-thread corpora in which comments are tagged with various deliberative quality indicators as well as political stance. Our analysis tool provides a visual querying system for the exploration and analysis of such corpora and allows social scientists to gain insights into the distributions of and relations between comment attributes, the homogeneity of thread sets, frequent thread structures, and changes in comment quality over the course of a single thread and, in particular, across multiple threads at once. We developed the tool in close collaboration with communication scientists in a user-centered approach. The system has proven its utility in thorough reviews with the communication scientists, corroborating existing findings in the literature and, in particular, provoking and answering new research questions. Final reviews with five independent experts confirmed these observations and revealed the potential of the Comment Analyzer for other datasets currently being created and analyzed in the communication sciences.
Citations: 0
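A hedged sketch of the kind of data such a tool operates on (an assumed structure, not the Comment Analyzer schema): comments form reply trees, each comment carries deliberative-quality tags and a stance, and simple aggregates over a thread summarize its structure and attribute distribution. All field and function names are hypothetical.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Comment:
    author: str
    stance: str                                   # e.g. "pro", "contra", "neutral"
    quality: set = field(default_factory=set)     # e.g. {"respect", "justification"}
    replies: list = field(default_factory=list)   # child comments

def thread_stats(root, depth=0, stats=None):
    """Depth distribution and attribute counts for one comment thread."""
    if stats is None:
        stats = {"depths": Counter(), "stance": Counter(), "quality": Counter()}
    stats["depths"][depth] += 1
    stats["stance"][root.stance] += 1
    stats["quality"].update(root.quality)
    for child in root.replies:
        thread_stats(child, depth + 1, stats)
    return stats

thread = Comment("A", "pro", {"justification"}, replies=[
    Comment("B", "contra", {"respect"}, replies=[
        Comment("A", "pro", {"justification", "respect"}),
    ]),
    Comment("C", "neutral"),
])
print(thread_stats(thread))
```

Aggregating such per-thread statistics across many threads is the kind of query the visual querying system lets researchers compose interactively.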
PolyGraph: a Graph-based Method for Floorplan Reconstruction from 3D Scans.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-24. DOI: 10.1109/TVCG.2025.3544769
Qian Sun, Chenrong Fang, Shuang Liu, Yidan Sun, Yu Shang, Ying He
Abstract: The task of reconstructing indoor floorplans has become an increasingly popular subject, offering substantial benefits across applications such as interior design, virtual reality, and robotics. Despite the growing interest, existing approaches frequently encounter challenges due to high computational costs and sensitivity to errors in primitive detection. In this paper, we introduce PolyGraph, a new computational framework that combines a deep-learning-based primitive detection network with an optimization-based reconstruction algorithm to produce high-quality reconstruction results. Specifically, we develop a novel guided wall-point primitive estimation network capable of generating dense samples along wall boundaries. This network not only retains structural detail but also shows improved robustness in the detection phase. PolyGraph then uses the wall points to establish a graph-based representation, formulating indoor floorplan reconstruction as a subgraph optimization problem. This approach significantly reduces the search space compared to existing pixel-level optimization approaches. By utilizing a "structural weight", we seamlessly integrate the structural information of walls and rooms into the graph representation, ensuring high-quality reconstruction results. Experimental results demonstrate PolyGraph's effectiveness and its advantages over other optimization-based approaches, showcasing its computational efficiency and its ability to preserve structural integrity and capture fine details, as quantified by the structure metrics. The source code is publicly available at https://github.com/Fern327/PolyGraph.
Citations: 0
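An illustrative sketch under assumptions, not the PolyGraph optimizer: detected wall points become graph nodes, candidate wall segments become weighted edges, and a candidate room loop is scored by its summed edge weight, mimicking the idea of searching for a subgraph that best explains the detected primitives. The weighting scheme, threshold, and names are all hypothetical.

```python
import itertools
import math

def build_edges(points, max_len=2.5):
    """Connect wall points that are close enough to plausibly lie on one wall segment."""
    edges = {}
    for i, j in itertools.combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        d = math.hypot(x2 - x1, y2 - y1)
        if d <= max_len:
            edges[(i, j)] = 1.0 / (1.0 + d)    # shorter segments get higher weight
    return edges

def loop_score(loop, edges):
    """Score a closed candidate room boundary by its total edge weight."""
    total = 0.0
    for a, b in zip(loop, loop[1:] + loop[:1]):
        key = (min(a, b), max(a, b))
        if key not in edges:
            return float("-inf")               # uses a segment that was never detected
        total += edges[key]
    return total

points = [(0, 0), (2, 0), (2, 2), (0, 2), (5, 5)]    # four room corners plus an outlier
edges = build_edges(points)
print(loop_score([0, 1, 2, 3], edges))   # plausible square room
print(loop_score([0, 1, 4, 3], edges))   # -inf: the outlier breaks the loop
```

Scoring and comparing candidate loops on a sparse point graph like this is far cheaper than optimizing over every pixel of a density map, which is the search-space reduction the abstract refers to.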
Real-Time, Free-Viewpoint Holographic Patient Rendering for Telerehabilitation via a Single Camera: A Data-driven Approach with 3D Gaussian Splatting for Real-World Adaptation.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-21. DOI: 10.1109/TVCG.2025.3544297
Shengting Cao, Jiamiao Zhao, Fei Hu, Yu Gan
Abstract: Telerehabilitation is a cost-effective alternative to in-clinic rehabilitation. Although convenient, it lacks immersive, free-viewpoint patient visualization. Current research explores two solutions to this issue. Mesh-based methods use 3D models and motion capture for AR visualization; however, they are labor-intensive and less photorealistic than 2D images. Microsoft's Holoportation generates photorealistic 3D models in real time with eight RGBD cameras, but it requires complex setups, high GPU power, and high-speed communication infrastructure, making deployment challenging. This paper presents a Real-Time Free-Viewpoint Holographic Patient Rendering (RT-FVHP) system for telerehabilitation. Unlike traditional methods that require manually crafted assets such as 3D meshes, texture maps, and skeletal rigging, our data-driven approach eliminates the need for explicit asset definitions. Inspired by the HumanNeRF framework, we retarget dynamic human poses to a canonical pose and leverage 3D Gaussian Splatting to train a neural network in canonical space for patient representation. The trained model generates 2D RGB outputs via Gaussian Splatting rasterization, guided by camera parameters and human pose inputs. Compatible with HoloLens 2 and web-based platforms, RT-FVHP operates effectively under real-world conditions, including occlusions caused by treadmills. Occlusion handling is accomplished using our Shape-Enforced Gaussian Density Control (SGDC), which initializes and densifies 3D Gaussians in occluded regions using estimated SMPL human body priors. This approach minimizes manual intervention while ensuring complete body reconstruction. With efficient Gaussian rasterization, the model delivers real-time performance of up to 400 FPS at 1080p resolution on a dedicated RTX 6000 GPU.
Citations: 0
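A toy sketch for orientation only, far simpler than the paper's renderer: splatting isotropic 3D Gaussians through a pinhole camera and compositing them front to back per pixel, which is the core rasterization idea behind 3D Gaussian Splatting. The isotropic footprint, camera model, and all names are simplifying assumptions.

```python
import numpy as np

def splat(means, radii, colors, opacities, focal=200.0, size=64):
    """Render isotropic 3D Gaussians into a size x size RGB image."""
    image = np.zeros((size, size, 3))
    transmittance = np.ones((size, size))
    order = np.argsort(means[:, 2])                 # composite front to back by depth
    ys, xs = np.mgrid[0:size, 0:size]
    for i in order:
        x, y, z = means[i]
        u, v = focal * x / z + size / 2, focal * y / z + size / 2
        sigma = focal * radii[i] / z                # projected footprint in pixels
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
        alpha = np.clip(opacities[i] * g, 0, 0.99)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1 - alpha
    return image

means = np.array([[0.0, 0.0, 2.0], [0.1, 0.05, 3.0]])
radii = np.array([0.05, 0.08])
colors = np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]])
opacities = np.array([0.8, 0.9])
print(splat(means, radii, colors, opacities).shape)   # (64, 64, 3)
```

Because each Gaussian contributes a closed-form footprint, this style of rasterization is what makes the very high frame rates reported in the abstract possible in practice.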
EverywhereAR: A Visual Authoring System for Creating Adaptive AR Game Scenes.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-02-20. DOI: 10.1109/TVCG.2025.3544021
Jia Liu, Renjie Zhang, Isidro Butaslac, Taishi Sawabe, Yuichiro Fujimoto, Masayuki Kanbara, Hirokazu Kato
Abstract: As a pivotal application of Augmented Reality (AR) technology, AR games empower players to bridge reality with virtuality, offering a distinct and immersive experience set apart from traditional games. However, when creating AR games, one of the most formidable challenges designers face is the unpredictability of intricate real-world environments, which hinders crafting naturally integrated scenes where virtual objects harmoniously blend with the players' surroundings. In this paper, we introduce EverywhereAR, a system that can flexibly realize a designer's idea in various real-world scenes. It provides a designer-friendly Game Scene Template development interface for designers to quickly graphify their inspirations. To achieve the best AR game scene, this work proposes a highly customizable integration method. According to the integrated AR scene graph, the system arranges each virtual object in a reasonable position so that the generated game scene looks natural. We conducted an experiment to evaluate our system's performance across various game scene templates and real-world environments. The results indicated that our system was able to generate AR game scenes matching the quality of scenes manually created by professional designers. In addition, we conducted another experiment to assess the effectiveness and usability of the proposed interface. The results showed that the interface was intuitive and efficient, allowing users to create a simple game scene within one minute.
Citations: 0
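A minimal illustrative sketch with an assumed data model, not EverywhereAR's: a scene template lists virtual objects with placement requirements, and a placement step matches each object to a detected real-world surface that satisfies them. The greedy matching rule and every name here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Surface:                 # a detected real-world plane
    kind: str                  # "floor", "table", "wall"
    area: float                # usable area in square metres

@dataclass
class TemplateItem:            # one virtual object in the scene template
    name: str
    needs: str                 # required surface kind
    min_area: float            # minimum free area it occupies

def place(template, surfaces):
    """Greedy matching of template items to surfaces; unplaced items map to None."""
    free = {id(s): s.area for s in surfaces}
    placement = {}
    for item in template:
        candidates = [s for s in surfaces
                      if s.kind == item.needs and free[id(s)] >= item.min_area]
        chosen = max(candidates, key=lambda s: free[id(s)], default=None)
        placement[item.name] = chosen
        if chosen is not None:
            free[id(chosen)] -= item.min_area
    return placement

room = [Surface("floor", 6.0), Surface("table", 0.5), Surface("wall", 3.0)]
template = [TemplateItem("treasure_chest", "floor", 0.4),
            TemplateItem("map", "wall", 0.2),
            TemplateItem("candle", "table", 0.1)]
for name, surf in place(template, room).items():
    print(name, "->", surf.kind if surf else "unplaced")
```

Running the same template against different detected rooms is what lets one authored scene adapt to many real-world environments.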