IEEE Transactions on Visualization and Computer Graphics: Latest Articles

NeRF-Art: Text-Driven Neural Radiance Fields Stylization
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-12-15 | DOI: 10.48550/arXiv.2212.08070
Can Wang, Ruixia Jiang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
{"title":"NeRF-Art: Text-Driven Neural Radiance Fields Stylization","authors":"Can Wang, Ruixia Jiang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao","doi":"10.48550/arXiv.2212.08070","DOIUrl":"https://doi.org/10.48550/arXiv.2212.08070","url":null,"abstract":"As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially in simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found on our project page: https://cassiepython.github.io/nerfart/.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44448914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
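The directional constraint mentioned in the abstract is, in text-driven stylization methods of this kind, typically a CLIP-space directional loss: the shift between the stylized and original renders in image-embedding space is aligned with the shift between the target and source prompts in text-embedding space. A minimal sketch under that assumption, with `clip_model` as a hypothetical wrapper exposing `encode_image` and `encode_text`, not the authors' code:

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(clip_model, img_orig, img_stylized, src_tokens, tgt_tokens):
    """CLIP-space directional loss: the image-embedding shift from the original
    to the stylized render should point in the same direction as the
    text-embedding shift from the source prompt to the target prompt."""
    with torch.no_grad():
        t_src = F.normalize(clip_model.encode_text(src_tokens), dim=-1)
        t_tgt = F.normalize(clip_model.encode_text(tgt_tokens), dim=-1)
    i_orig = F.normalize(clip_model.encode_image(img_orig), dim=-1)
    i_sty = F.normalize(clip_model.encode_image(img_stylized), dim=-1)

    delta_t = F.normalize(t_tgt - t_src, dim=-1)   # text direction
    delta_i = F.normalize(i_sty - i_orig, dim=-1)  # image direction
    return (1.0 - (delta_i * delta_t).sum(dim=-1)).mean()
```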
What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-11-11 | DOI: 10.48550/arXiv.2211.06009
Zezeng Li, Zebin Xu, Ying Li, X. Gu, Na Lei
{"title":"What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives","authors":"Zezeng Li, Zebin Xu, Ying Li, X. Gu, Na Lei","doi":"10.48550/arXiv.2211.06009","DOIUrl":"https://doi.org/10.48550/arXiv.2211.06009","url":null,"abstract":"Intelligent Mesh Generation (IMG) represents a novel and promising field of research, utilizing machine learning techniques to generate meshes. Despite its relative infancy, IMG has significantly broadened the adaptability and practicality of mesh generation techniques, delivering numerous breakthroughs and unveiling potential future pathways. However, a noticeable void exists in the contemporary literature concerning comprehensive surveys of IMG methods. This paper endeavors to fill this gap by providing a systematic and thorough survey of the current IMG landscape. With a focus on 113 preliminary IMG methods, we undertake a meticulous analysis from various angles, encompassing core algorithm techniques and their application scope, agent learning objectives, data types, targeted challenges, as well as advantages and limitations. We have curated and categorized the literature, proposing three unique taxonomies based on key techniques, output mesh unit elements, and relevant input data types. This paper also underscores several promising future research directions and challenges in IMG. To augment reader accessibility, a dedicated IMG project page is available at https://github.com/xzb030/IMG_Survey.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41797284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
GPA-Net: No-Reference Point Cloud Quality Assessment with Multi-task Graph Convolutional Network
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-10-29 | DOI: 10.48550/arXiv.2210.16478
Ziyu Shan, Qi Yang, Rui Ye, Yujie Zhang, Yi Xu, Xiaozhong Xu, Shan Liu
{"title":"GPA-Net: No-Reference Point Cloud Quality Assessment with Multi-task Graph Convolutional Network","authors":"Ziyu Shan, Qi Yang, Rui Ye, Yujie Zhang, Yi Xu, Xiaozhong Xu, Shan Liu","doi":"10.48550/arXiv.2210.16478","DOIUrl":"https://doi.org/10.48550/arXiv.2210.16478","url":null,"abstract":"With the rapid development of 3D vision, point cloud has become an increasingly popular 3D visual media content. Due to the irregular structure, point cloud has posed novel challenges to the related research, such as compression, transmission, rendering and quality assessment. In these latest researches, point cloud quality assessment (PCQA) has attracted wide attention due to its significant role in guiding practical applications, especially in many cases where the reference point cloud is unavailable. However, current no-reference metrics which based on prevalent deep neural network have apparent disadvantages. For example, to adapt to the irregular structure of point cloud, they require preprocessing such as voxelization and projection that introduce extra distortions, and the applied grid-kernel networks, such as Convolutional Neural Networks, fail to extract effective distortion-related features. Besides, they rarely consider the various distortion patterns and the philosophy that PCQA should exhibit shift, scaling, and rotation invariance. In this paper, we propose a novel no-reference PCQA metric named the Graph convolutional PCQA network (GPA-Net). To extract effective features for PCQA, we propose a new graph convolution kernel, i.e., GPAConv, which attentively captures the perturbation of structure and texture. Then, we propose the multi-task framework consisting of one main task (quality regression) and two auxiliary tasks (distortion type and degree predictions). Finally, we propose a coordinate normalization module to stabilize the results of GPAConv under shift, scale and rotation transformations. Experimental results on two independent databases show that GPA-Net achieves the best performance compared to the state-of-the-art no-reference PCQA metrics, even better than some full-reference metrics in some cases. The code is available at: https://github.com/Slowhander/GPA-Net.git.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42738762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
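The abstract does not detail the coordinate normalization module; a common generic way to make point-cloud features invariant to shift, scale, and rotation is to center the cloud, rescale it to the unit sphere, and rotate it into its principal-axis frame. A minimal sketch of that generic normalization, offered as an illustration rather than the paper's exact module:

```python
import numpy as np

def normalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Center a point cloud, scale it to the unit sphere, and rotate it into
    its PCA frame so downstream features are invariant to shift, scale, and
    (approximately) rotation. `points` has shape (N, 3)."""
    centered = points - points.mean(axis=0, keepdims=True)   # shift invariance
    scale = np.max(np.linalg.norm(centered, axis=1)) + 1e-12
    scaled = centered / scale                                 # scale invariance
    cov = scaled.T @ scaled
    _, eigvecs = np.linalg.eigh(cov)                          # columns sorted by ascending eigenvalue
    return scaled @ eigvecs[:, ::-1]                          # major principal axis first
```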
Explore Contextual Information for 3D Scene Graph Generation
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-10-12 | DOI: 10.48550/arXiv.2210.06240
Yu-An Liu, Chengjiang Long, Zhaoxuan Zhang, Bo Liu, Qiang Zhang, Baocai Yin, Xin Yang
{"title":"Explore Contextual Information for 3D Scene Graph Generation","authors":"Yu-An Liu, Chengjiang Long, Zhaoxuan Zhang, Bo Liu, Qiang Zhang, Baocai Yin, Xin Yang","doi":"10.48550/arXiv.2210.06240","DOIUrl":"https://doi.org/10.48550/arXiv.2210.06240","url":null,"abstract":"3D scene graph generation (SGG) has been of high interest in computer vision. Although the accuracy of 3D SGG on coarse classification and single relation label has been gradually improved, the performance of existing works is still far from being perfect for fine-grained and multi-label situations. In this paper, we propose a framework fully exploring contextual information for the 3D SGG task, which attempts to satisfy the requirements of fine-grained entity class, multiple relation labels, and high accuracy simultaneously. Our proposed approach is composed of a Graph Feature Extraction module and a Graph Contextual Reasoning module, achieving appropriate information-redundancy feature extraction, structured organization, and hierarchical inferring. Our approach achieves superior or competitive performance over previous methods on the 3DSSG dataset, especially on the relationship prediction sub-task.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44168847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
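For the multi-label relation setting mentioned in the abstract, a standard generic formulation scores every predicate independently for each object pair and trains with a binary cross-entropy loss. A minimal sketch of such a predicate head, as a generic stand-in; the paper's Graph Feature Extraction and Graph Contextual Reasoning modules are more involved:

```python
import torch.nn as nn

class RelationHead(nn.Module):
    """Scores each of `num_relations` predicates independently per edge,
    allowing multiple relation labels for the same object pair."""
    def __init__(self, edge_feat_dim: int, num_relations: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(edge_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_relations),
        )

    def forward(self, edge_features):      # (E, edge_feat_dim)
        return self.mlp(edge_features)     # (E, num_relations) logits

criterion = nn.BCEWithLogitsLoss()
# loss = criterion(head(edge_feats), multi_hot_targets)  # targets in {0, 1}
```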
Multi-User Redirected Walking in Separate Physical Spaces for Online VR Scenarios
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-10-07 | DOI: 10.48550/arXiv.2210.05356
Sen-Zhe Xu, Jia-Hong Liu, Miao Wang, Fang-Lue Zhang, Songhai Zhang
{"title":"Multi-User Redirected Walking in Separate Physical Spaces for Online VR Scenarios","authors":"Sen-Zhe Xu, Jia-Hong Liu, Miao Wang, Fang-Lue Zhang, Songhai Zhang","doi":"10.48550/arXiv.2210.05356","DOIUrl":"https://doi.org/10.48550/arXiv.2210.05356","url":null,"abstract":"With the recent rise of Metaverse, online multiplayer VR applications are becoming increasingly prevalent worldwide. However, as multiple users are located in different physical environments, different reset frequencies and timings can lead to serious fairness issues for online collaborative/competitive VR applications. For the fairness of online VR apps/games, an ideal online RDW strategy must make the locomotion opportunities of different users equal, regardless of different physical environment layouts. The existing RDW methods lack the scheme to coordinate multiple users in different PEs, and thus have the issue of triggering too many resets for all the users under the locomotion fairness constraint. We propose a novel multi-user RDW method that is able to significantly reduce the overall reset number and give users a better immersive experience by providing a fair exploration. Our key idea is to first find out the \"bottleneck\" user that may cause all users to be reset and estimate the time to reset given the users' next targets, and then redirect all the users to favorable poses during that maximized bottleneck time to ensure the subsequent resets can be postponed as much as possible. More particularly, we develop methods to estimate the time of possibly encountering obstacles and the reachable area for a specific pose to enable the prediction of the next reset caused by any user. Our experiments and user study found that our method outperforms existing RDW methods in online VR applications.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46927440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
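The method relies on estimating how long a user can keep walking before a reset is needed. In a simplified rectangular, obstacle-free physical space, that estimate reduces to intersecting the walking ray with the walls and dividing by walking speed. A minimal sketch of that simplification; the paper's estimator additionally handles obstacles and redirection gains:

```python
import math

def time_to_boundary(pos, heading, speed, width, height):
    """Time until a user walking straight from `pos` = (x, y) along `heading`
    (radians) at `speed` (m/s) reaches a wall of a `width` x `height`
    rectangular tracked space whose origin is at one corner."""
    dx, dy = math.cos(heading), math.sin(heading)
    dists = []
    if dx > 0: dists.append((width - pos[0]) / dx)
    if dx < 0: dists.append(-pos[0] / dx)
    if dy > 0: dists.append((height - pos[1]) / dy)
    if dy < 0: dists.append(-pos[1] / dy)
    # cos and sin cannot both be zero, so dists is never empty.
    return min(dists) / speed

# The "bottleneck" user is then the one with the smallest time_to_boundary.
```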
TraInterSim: Adaptive and Planning-Aware Hybrid-Driven Traffic Intersection Simulation
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-10-03 | DOI: 10.48550/arXiv.2210.08118
Pei Lv, Xinming Pei, Xinyu Ren, Yuzhen Zhang, Chaochao Li, Mingliang Xu
{"title":"TraInterSim: Adaptive and Planning-Aware Hybrid-Driven Traffic Intersection Simulation","authors":"Pei Lv, Xinming Pei, Xinyu Ren, Yuzhen Zhang, Chaochao Li, Mingliang Xu","doi":"10.48550/arXiv.2210.08118","DOIUrl":"https://doi.org/10.48550/arXiv.2210.08118","url":null,"abstract":"Traffic intersections are important scenes that can be seen almost everywhere in the traffic system. Currently, most simulation methods perform well at highways and urban traffic networks. In intersection scenarios, the challenge lies in the lack of clearly defined lanes, where agents with various motion plannings converge in the central area from different directions. Traditional model-based methods are difficult to drive agents to move realistically at intersections without enough predefined lanes, while data-driven methods often require a large amount of high-quality input data. Simultaneously, tedious parameter tuning is inevitable involved to obtain the desired simulation results. In this paper, we present a novel adaptive and planning-aware hybrid-driven method (TraInterSim) to simulate traffic intersection scenarios. Our hybrid-driven method combines an optimization-based data-driven scheme with a velocity continuity model. It guides the agent's movements using real-world data and can generate those behaviors not present in the input data. Our optimization method fully considers velocity continuity, desired speed, direction guidance, and planning-aware collision avoidance. Agents can perceive others' motion plannings and relative distances to avoid possible collisions. To preserve the individual flexibility of different agents, the parameters in our method are automatically adjusted during the simulation. TraInterSim can generate realistic behaviors of heterogeneous agents in different traffic intersection scenarios in interactive rates. Through extensive experiments as well as user studies, we validate the effectiveness and rationality of the proposed simulation method.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47313608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
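Two of the soft terms in the abstract's optimization, velocity continuity and desired speed, can be written as simple quadratic penalties on a candidate next velocity. A minimal sketch of just those two terms with illustrative weights; the full objective also includes direction guidance and planning-aware collision avoidance:

```python
import numpy as np

def velocity_cost(v_next, v_prev, desired_speed, w_cont=1.0, w_speed=0.5):
    """Cost of a candidate next velocity `v_next` (2D numpy array): penalize
    abrupt changes from the previous velocity (continuity) and deviation of
    the resulting speed from the agent's desired speed."""
    continuity = np.sum((v_next - v_prev) ** 2)
    speed_dev = (np.linalg.norm(v_next) - desired_speed) ** 2
    return w_cont * continuity + w_speed * speed_dev
```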
RankFIRST: Visual Analysis for Factor Investment by Ranking Stock Timeseries
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-09-27 | DOI: 10.1109/TVCG.2022.3209414
Huijie Guo, Meijun Liu, Bowen Yang, Ye Sun, Huamin Qu, Lei Shi
{"title":"RankFIRST: Visual Analysis for Factor Investment By Ranking Stock Timeseries.","authors":"Huijie Guo, Meijun Liu, Bowen Yang, Ye Sun, Huamin Qu, Lei Shi","doi":"10.1109/TVCG.2022.3209414","DOIUrl":"10.1109/TVCG.2022.3209414","url":null,"abstract":"<p><p>In the era of quantitative investment, factor-based investing models are widely adopted in the construction of stock portfolios. These models explain the performance of individual stocks by a set of financial factors, e.g., market beta and company size. In industry, open investment platforms allow the online building of factor-based models, yet set a high bar on the engineering expertise of end-users. State-of-the-art visualization systems integrate the whole factor investing pipeline, but do not directly address domain users' core requests on ranking factors and stocks for portfolio construction. The current model lacks explainability, which downgrades its credibility with stock investors. To fill the gap in modeling, ranking, and visualizing stock time series for factor investment, we designed and implemented a visual analytics system, namely RankFIRST. The system offers built-in support for an established factor collection and a cross-sectional regression model viable for human interpretation. A hierarchical slope graph design is introduced according to the desired characteristics of good factors for stock investment. A novel firework chart is also invented extending the well-known candlestick chart for stock time series. We evaluated the system on the full-scale Chinese stock market data in the recent 30 years. Case studies and controlled user evaluation demonstrate the superiority of our system on factor investing, in comparison to both passive investing on stock indices and existing stock market visual analytics tools.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"PP ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9509274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
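The cross-sectional regression model referred to in the abstract regresses, on each date, the returns of all stocks on their factor exposures; the fitted coefficients are that date's factor returns, and their stability over time is one way to rank factors. A minimal sketch of that generic procedure with ordinary least squares, not the system's exact implementation:

```python
import numpy as np

def cross_sectional_factor_returns(returns, exposures):
    """For each date t, regress the cross-section of stock returns on factor
    exposures. `returns` has shape (T, N); `exposures` has shape (T, N, K).
    Returns the (T, K) series of estimated factor returns."""
    T, N = returns.shape
    K = exposures.shape[2]
    factor_returns = np.empty((T, K))
    for t in range(T):
        X = np.column_stack([np.ones(N), exposures[t]])          # intercept + K factors
        beta, *_ = np.linalg.lstsq(X, returns[t], rcond=None)    # OLS fit
        factor_returns[t] = beta[1:]                              # drop the intercept
    return factor_returns

# A simple factor ranking: mean factor return divided by its standard deviation.
# ranking = factor_returns.mean(0) / factor_returns.std(0)
```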
A Collaborative, Interactive and Context-Aware Drawing Agent for Co-Creative Design
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-09-26 | DOI: 10.48550/arXiv.2209.12588
F. Ibarrola, Tomas Lawton, Kazjon Grace
{"title":"A Collaborative, Interactive and Context-Aware Drawing Agent for Co-Creative Design","authors":"F. Ibarrola, Tomas Lawton, Kazjon Grace","doi":"10.48550/arXiv.2209.12588","DOIUrl":"https://doi.org/10.48550/arXiv.2209.12588","url":null,"abstract":"Recent advances in text-conditioned generative models have provided us with neural networks capable of creating images of astonishing quality, be they realistic, abstract, or even creative. These models have in common that (more or less explicitly) they all aim to produce a high-quality one-off output given certain conditions, and in that they are not well suited for a creative collaboration framework. Drawing on theories from cognitive science that model how professional designers and artists think, we argue how this setting differs from the former and introduce CICADA: a Collaborative, Interactive Context-Aware Drawing Agent. CICADA uses a vector-based synthesis-by-optimisation method to take a partial sketch (such as might be provided by a user) and develop it towards a goal by adding and/or sensibly modifying traces. Given that this topic has been scarcely explored, we also introduce a way to evaluate desired characteristics of a model in this context by means of proposing a diversity measure. CICADA is shown to produce sketches of quality comparable to a human user's, enhanced diversity and most importantly to be able to cope with change by continuing the sketch minding the user's contributions in a flexible manner.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46267533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
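The diversity measure proposed in the paper is not spelled out in the abstract; a generic way to quantify the diversity of a set of generated sketches is the mean pairwise distance between their embeddings (for example, CLIP image embeddings). A minimal sketch of that generic measure, with the embedding step assumed to happen elsewhere:

```python
import numpy as np

def mean_pairwise_diversity(embeddings: np.ndarray) -> float:
    """Average pairwise Euclidean distance between sketch embeddings of shape
    (M, D); larger values indicate a more diverse set of outputs."""
    m = embeddings.shape[0]
    total, count = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            total += float(np.linalg.norm(embeddings[i] - embeddings[j]))
            count += 1
    return total / max(count, 1)
```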
Adaptive 3D Mesh Steganography Based on Feature-Preserving Distortion
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-09-19 | DOI: 10.48550/arXiv.2209.08884
Yushu Zhang, Jiahao Zhu, Mingfu Xue, Xinpeng Zhang, Xiaochun Cao
{"title":"Adaptive 3D Mesh Steganography Based on Feature-Preserving Distortion","authors":"Yushu Zhang, Jiahao Zhu, Mingfu Xue, Xinpeng Zhang, Xiaochun Cao","doi":"10.48550/arXiv.2209.08884","DOIUrl":"https://doi.org/10.48550/arXiv.2209.08884","url":null,"abstract":"Current 3D mesh steganography algorithms relying on geometric modification are prone to detection by steganalyzers. In traditional steganography, adaptive steganography has proven to be an efficient means of enhancing steganography security. Taking inspiration from this, we propose a highly adaptive embedding algorithm, guided by the principle of minimizing a carefully crafted distortion through efficient steganography codes. Specifically, we tailor a payload-limited embedding optimization problem for 3D settings and devise a feature-preserving distortion (FPD) to measure the impact of message embedding. The distortion takes on an additive form and is defined as a weighted difference of the effective steganalytic subfeatures utilized by the current 3D steganalyzers. With practicality in mind, we refine the distortion to enhance robustness and computational efficiency. By minimizing the FPD, our algorithm can preserve mesh features to a considerable extent, including steganalytic and geometric features, while achieving a high embedding capacity. During the practical embedding phase, we employ the Q-layered syndrome trellis code (STC). However, calculating the bit modification probability (BMP) for each layer of the Q-layered STC, given the variation of Q, can be cumbersome. To address this issue, we design a universal and automatic approach for the BMP calculation. The experimental results demonstrate that our algorithm achieves state-of-the-art performance in countering 3D steganalysis.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43256571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
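The feature-preserving distortion is described as additive and defined as a weighted difference of steganalytic subfeatures between the cover and a candidate stego mesh. A minimal sketch of an additive cost of that form; the subfeature extractor and weights are placeholders, not the paper's exact definitions:

```python
import numpy as np

def additive_distortion(cover_feats, stego_feats, weights):
    """Additive embedding cost: a weighted absolute difference of steganalytic
    subfeatures between the cover and a candidate stego version.
    `cover_feats` and `stego_feats` have shape (num_elements, num_subfeatures);
    `weights` has shape (num_subfeatures,). Returns a per-element cost."""
    return np.abs(stego_feats - cover_feats) @ np.asarray(weights)

# Total distortion of an embedding is the sum of per-element costs of the
# elements actually modified, which is what makes the cost additive.
```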
PCDNF: Revisiting Learning-based Point Cloud Denoising via Joint Normal Filtering
IF 5.2 | Region 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2022-09-02 | DOI: 10.48550/arXiv.2209.00798
Zheng Liu, Sijing Zhan, Ya-Ou Zhao, Yuanyuan Liu, Renjie Chen, Ying He
{"title":"PCDNF: Revisiting Learning-based Point Cloud Denoising via Joint Normal Filtering","authors":"Zheng Liu, Sijing Zhan, Ya-Ou Zhao, Yuanyuan Liu, Renjie Chen, Ying He","doi":"10.48550/arXiv.2209.00798","DOIUrl":"https://doi.org/10.48550/arXiv.2209.00798","url":null,"abstract":"Point cloud denoising is a fundamental and challenging problem in geometry processing. Existing methods typically involve direct denoising of noisy input or filtering raw normals followed by point position updates. Recognizing the crucial relationship between point cloud denoising and normal filtering, we re-examine this problem from a multitask perspective and propose an end-to-end network called PCDNF for joint normal filtering-based point cloud denoising. We introduce an auxiliary normal filtering task to enhance the network's ability to remove noise while preserving geometric features more accurately. Our network incorporates two novel modules. First, we design a shape-aware selector to improve noise removal performance by constructing latent tangent space representations for specific points, taking into account learned point and normal features as well as geometric priors. Second, we develop a feature refinement module to fuse point and normal features, capitalizing on the strengths of point features in describing geometric details and normal features in representing geometric structures, such as sharp edges and corners. This combination overcomes the limitations of each feature type and better recovers geometric information. Extensive evaluations, comparisons, and ablation studies demonstrate that the proposed method outperforms state-of-the-art approaches in both point cloud denoising and normal filtering.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43075765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
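The joint formulation pairs the main denoising objective with an auxiliary normal-filtering objective; a common generic way to express such a multi-task setup is a weighted sum of the two losses. A minimal sketch of that kind of training loss; the individual terms and weighting are assumptions, not the authors' exact formulation:

```python
import torch.nn.functional as F

def joint_denoise_loss(pred_points, gt_points, pred_normals, gt_normals, lam=0.5):
    """Weighted sum of a point-position loss (denoising) and a normal loss
    (auxiliary normal filtering). Predictions are assumed pre-matched, i.e.
    row i of each prediction corresponds to row i of the ground truth."""
    point_loss = F.mse_loss(pred_points, gt_points)
    # 1 - |cos| so that flipped but parallel normals are not penalized.
    cos = F.cosine_similarity(pred_normals, gt_normals, dim=-1)
    normal_loss = (1.0 - cos.abs()).mean()
    return point_loss + lam * normal_loss
```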