Latest Articles: IEEE Transactions on Visualization and Computer Graphics

Field of View Restriction and Snap Turning as Cybersickness Mitigation Tools.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-27. DOI: 10.1109/TVCG.2024.3470214
Jonathan W Kelly, Taylor A Doty, Stephen B Gilbert, Michael C Dorneich
Multiple tools are available to reduce cybersickness (sickness caused by virtual reality), but past research has not investigated the combined effects of multiple mitigation tools. Field of view (FOV) restriction limits peripheral vision during self-motion, and ample evidence supports its effectiveness for reducing cybersickness. Snap turning involves discrete rotations of the user's perspective without presenting intermediate views, although reports on its effectiveness at reducing cybersickness are limited and equivocal. Both mitigation tools reduce the visual motion that can cause cybersickness. The current study (N = 201) investigated the individual and combined effects of FOV restriction and snap turning on cybersickness when playing a consumer virtual reality game. FOV restriction and snap turning in isolation reduced cybersickness compared to a control condition without mitigation tools. Yet, the combination of FOV restriction and snap turning did not further reduce cybersickness beyond the individual tools in isolation, and in some cases the combination of tools led to cybersickness similar to that in the no-mitigation control. These results indicate that caution is warranted when combining multiple cybersickness mitigation tools, which can interact in unexpected ways.
Citations: 0
A Simulation-based Approach for Quantifying the Impact of Interactive Label Correction for Machine Learning.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-26. DOI: 10.1109/TVCG.2024.3468352
Yixuan Wang, Jieqiong Zhao, Jiayi Hong, Ronald G Askin, Ross Maciejewski
Recent years have witnessed growing interest in understanding the sensitivity of machine learning to training data characteristics. While researchers have claimed the benefits of activities such as a human-in-the-loop approach of interactive label correction for improving model performance, there have been limited studies to quantitatively probe the relationship between the cost of label correction and the associated benefit in model performance. We employ a simulation-based approach to explore the efficacy of label correction under diverse task conditions, namely different datasets, noise properties, and machine learning algorithms. We measure the impact of label correction on model performance under the best-case scenario assumption: perfect correction (perfect human and visual systems), serving as an upper-bound estimation of the benefits derived from visual interactive label correction. The simulation results reveal a trade-off between the label correction effort expended and model performance improvement. Notably, task conditions play a crucial role in shaping the trade-off. Based on the simulation results, we develop a set of recommendations to help practitioners determine conditions under which interactive label correction is an effective mechanism for improving model performance.
Citations: 0
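The simulation protocol the abstract describes — inject label noise, then train under the best-case assumption that some fraction of the noisy labels is perfectly corrected, and measure the resulting accuracy — can be sketched as follows. This is an illustrative reconstruction under arbitrary choices (synthetic data, logistic regression, 30% uniform noise), not the authors' simulation code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data with a clean held-out test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Inject uniform label noise into 30% of the training labels.
noise_rate = 0.3
noisy = rng.choice(len(y_tr), size=int(noise_rate * len(y_tr)), replace=False)
y_noisy = y_tr.copy()
y_noisy[noisy] = 1 - y_noisy[noisy]  # flip the binary labels

def accuracy_after_correction(frac):
    """Train after perfectly correcting `frac` of the noisy labels --
    the paper's best-case upper bound on the benefit of correction."""
    y_fixed = y_noisy.copy()
    n_fix = int(frac * len(noisy))
    y_fixed[noisy[:n_fix]] = y_tr[noisy[:n_fix]]  # restore true labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_fixed)
    return model.score(X_te, y_te)

# Sweep correction effort to trace the cost/benefit curve.
curve = {f: round(accuracy_after_correction(f), 3) for f in (0.0, 0.5, 1.0)}
print(curve)
```

Sweeping `frac` over many task conditions (datasets, noise rates, models) yields the effort-versus-performance trade-off curves the study analyzes.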
A Comprehensive Evaluation of Arbitrary Image Style Transfer Methods.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-25. DOI: 10.1109/TVCG.2024.3466964
Zijun Zhou, Fan Tang, Yuxin Zhang, Oliver Deussen, Juan Cao, Weiming Dong, Xiangtao Li, Tong-Yee Lee
Despite the remarkable progress in the field of arbitrary image style transfer (AST), inconsistent evaluation continues to plague style transfer research. Existing methods often suffer from limited objective evaluation and inconsistent subjective feedback, hindering reliable comparisons among AST variants. In this study, we propose a multi-granularity assessment system that combines standardized objective and subjective evaluations. We collect a fine-grained dataset considering a range of image contexts such as different scenes, object complexities, and rich parsing information from multiple sources. Objective and subjective studies are conducted using the collected dataset. Specifically, we innovate on traditional subjective studies by developing an online evaluation system utilizing a combination of point-wise, pair-wise, and group-wise questionnaires. Finally, we bridge the gap between objective and subjective evaluations by examining the consistency between the results from the two studies. We experimentally evaluate CNN-based, flow-based, transformer-based, and diffusion-based AST methods by the proposed multi-granularity assessment system, which lays the foundation for a reliable and robust evaluation. Providing standardized measures, objective data, and detailed subjective feedback empowers researchers to make informed comparisons and drive innovation in this rapidly evolving field. Finally, for the collected dataset and our online evaluation system, please see http://ivc.ia.ac.cn.
Citations: 0
PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-24. DOI: 10.1109/TVCG.2024.3456215
Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Hee-Joon Bae, Bohyoung Kim, Jinwook Seo
Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.
Vol. 31, No. 1, pp. 470-480.
Citations: 0
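The "slice-and-wrap" idea — folding irregular timestamps by a fixed period so that measurements taken at the same time of day overlay on a circle — can be illustrated with a small stand-alone sketch. The data, the 24-hour period, and the function name are hypothetical; this is not the PhenoFlow implementation:

```python
import math
from datetime import datetime

def wrap_measurement(ts: datetime, period_hours: float = 24.0):
    """Map a timestamp to a point on the unit circle for the given period,
    so readings at the same time of day land at the same angle."""
    hours = ts.hour + ts.minute / 60 + ts.second / 3600
    theta = 2 * math.pi * (hours % period_hours) / period_hours
    # Unit-circle coordinates for plotting the overlaid circular view.
    return math.cos(theta), math.sin(theta)

# Irregularly timed blood-pressure readings (hypothetical values).
readings = [
    (datetime(2024, 3, 1, 6, 0), 142),    # day 1, 06:00
    (datetime(2024, 3, 2, 6, 0), 138),    # day 2, 06:00 -> same angle as day 1
    (datetime(2024, 3, 2, 18, 30), 151),  # day 2, 18:30
]
points = [wrap_measurement(ts) for ts, _bp in readings]
```

Because the first two readings fall at the same time of day, temporal folding maps them to the same position, which is what lets repeated daily measurements overlay in a circular layout.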
SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-Based Synthetic Lethal Prediction
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-24. DOI: 10.1109/TVCG.2024.3456325
Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li
Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews.
Vol. 31, No. 1, pp. 919-929.
Citations: 0
Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-23. DOI: 10.1109/TVCG.2024.3456186
Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit
Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact in two use cases and feedback from notebook users from various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.
Vol. 31, No. 1, pp. 1213-1223. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10689475
Citations: 0
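The core mechanism — capturing notebook versions over time and highlighting the differences between them — can be illustrated generically with Python's standard-library difflib. The cell contents are hypothetical and this is a generic sketch, not the Loops implementation:

```python
import difflib

# Two captured versions of the same notebook cell (hypothetical provenance).
v1 = ["df = load('data.csv')", "df = df.dropna()", "df.plot()"]
v2 = ["df = load('data.csv')", "df = df.fillna(0)", "df.plot()", "df.describe()"]

# Unified diff with zero context lines; keep only the actual +/- changes,
# dropping the '---'/'+++' file headers.
diff = difflib.unified_diff(v1, v2, lineterm="", n=0)
changes = [
    line for line in diff
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print(changes)
```

A provenance view like the one the paper describes would render such per-cell change sets along a timeline, so analysts can see which edits produced which downstream effects.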
StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-23. DOI: 10.1109/TVCG.2024.3456363
Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu
The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.
Vol. 31, No. 1, pp. 908-918.
Citations: 0
BEMTrace: Visualization-Driven Approach for Deriving Building Energy Models from BIM
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-23. DOI: 10.1109/TVCG.2024.3456315
Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Gröller, Johanna Schmidt
Building Information Modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, Building Energy Modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building's energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and to show that the BEMTrace workflow helps users understand complex 3D data wrangling processes.
Vol. 31, No. 1, pp. 240-250.
Citations: 0
DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-23. DOI: 10.1109/TVCG.2024.3456391
Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D. Brumar, Mingwei Li, Remco Chang
Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.
Vol. 31, No. 1, pp. 207-217.
Citations: 0
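The underlying idea — explaining a brushed visual pattern as a first-order predicate (here, an interval constraint) over the original dimensions — can be sketched as below. The synthetic data and the simple precision-scored interval search are illustrative assumptions, not DimBridge's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
# High-dimensional data: dimension 0 separates two clusters; the rest is noise.
X = rng.normal(size=(200, 5))
X[:100, 0] += 4.0
selected = np.arange(100)  # indices the user brushed in the 2D projection

def interval_predicate(X, selected):
    """For each original dimension, form the interval spanned by the selected
    points and score it by precision: the fraction of points inside the
    interval that actually belong to the selection."""
    sel = np.zeros(len(X), dtype=bool)
    sel[selected] = True
    best = None
    for d in range(X.shape[1]):
        lo, hi = X[sel, d].min(), X[sel, d].max()
        inside = (X[:, d] >= lo) & (X[:, d] <= hi)
        precision = sel[inside].mean()
        if best is None or precision > best[3]:
            best = (d, lo, hi, precision)
    return best

dim, lo, hi, prec = interval_predicate(X, selected)
print(f"x{dim} in [{lo:.2f}, {hi:.2f}]  (precision {prec:.2f})")
```

Because only dimension 0 carries the cluster structure, the search reports an interval predicate on that dimension — the kind of data-space explanation a user could then inspect and refine interactively.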
GraspDiff: Grasping Generation for Hand-Object Interaction With Multimodal Guided Diffusion.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2024-09-23. DOI: 10.1109/TVCG.2024.3466190
Binghui Zuo, Zimeng Zhao, Wenqian Sun, Xiaohan Yuan, Zhipeng Yu, Yangang Wang
Grasping generation holds significant importance in both robotics and AI-generated content. While pure network paradigms based on VAEs or GANs ensure diversity in outcomes, they often fall short of achieving plausibility. Additionally, although those two-step paradigms that first predict contact and then optimize distance yield plausible results, they are known to be time-consuming. This paper introduces a novel paradigm powered by DDPM, accommodating diverse modalities with varying interaction granularities as its generating conditions, including 3D object, contact affordance, and image content. Our key idea is that the iterative steps inherent to diffusion models can supplant the iterative optimization routines in existing optimization methods, thereby endowing the generated results from our method with both diversity and plausibility. Using the same training data, our paradigm achieves superior generation performance and competitive generation speed compared to optimization-based paradigms. Extensive experiments on both in-domain and out-of-domain objects demonstrate that our method achieves significant improvement over the SOTA method. We will release the code for research purposes.
Citations: 0