Multi-Level Cross-Attention Point Cloud Completion Network
Wen-Xuan Chen, Yong Hu, Bei-Yi Tian, Wen Luo, Lin-Wang Yuan
Computers & Graphics, Volume 130, Article 104253 (2025). DOI: 10.1016/j.cag.2025.104253
Citations: 0
Abstract
Sensors often acquire point cloud data that is sparse and incomplete due to limited resolution or occlusion. Reconstructing the original shape from an incomplete point cloud is therefore essential for practical applications. However, existing Transformer-based methods do not make full use of the cross-attention mechanism to extract and fuse point features and inter-point relationships, leading to weak representation of fine detail. In this paper, we present the Multi-Level Cross-Attention Point Cloud Completion Network (MLCANet), which leverages the multi-level features of point clouds and their feature associations to guide point generation. First, the encoder enhances point cloud features with Multi-Scale Feature Enhancement Cross-Attention (MSFECA), which lets channel- and spatial-dimension information from low-resolution and high-resolution point clouds interact. Second, we propose Structural Similarity Cross-Attention (SSCA) in the decoder to learn prior knowledge from partial point clouds, thereby improving detail recovery. Third, we present an Augmented Affiliation Transformation (AAT) designed to correct positional discrepancies between partial and missing points. Qualitative and quantitative experiments on several challenging point cloud completion benchmarks demonstrate the effectiveness of our method: compared to existing methods, the Chamfer Distance (CD) is reduced by at least 3.1% on ShapeNet-Part and 4.7% on ModelNet40.
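For reference, the sketch below shows a plain NumPy implementation of the symmetric squared Chamfer Distance used as the evaluation metric above. It is an illustrative, generic definition only; the paper's exact CD variant (squared vs. L1 distances, averaging convention) is not specified here and may differ.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric squared Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Generic textbook form for illustration; the paper's normalization
    convention may differ.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = p[:, None, :] - q[None, :, :]
    dist2 = np.sum(diff * diff, axis=-1)

    # For each point in p, squared distance to its nearest neighbour in q,
    # and vice versa; average each direction and sum.
    p_to_q = dist2.min(axis=1).mean()
    q_to_p = dist2.min(axis=0).mean()
    return float(p_to_q + q_to_p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    partial = rng.normal(size=(1024, 3))   # hypothetical partial cloud
    complete = rng.normal(size=(2048, 3))  # hypothetical completed cloud
    print(chamfer_distance(partial, complete))
```

A lower CD indicates that the completed cloud lies closer to the reference shape in both directions, which is why the reported percentage reductions translate directly into better completion quality.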
About the journal:
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.