{"title":"EfficientPEAL: Efficient prior-embedded attention learning for partially overlapping point cloud registration","authors":"Junle Yu , Wenhui Zhou , Zhehao Shen , Yongwei Miao","doi":"10.1016/j.eswa.2025.128591","DOIUrl":null,"url":null,"abstract":"<div><div>Learning discriminative point-wise features is critical for partially overlapping point cloud registration. In recent years, the integration of a Transformer into point cloud feature representation has demonstrated remarkable success, which typically involves a self-attention module to learn intra-point-cloud features, followed by a cross-attention module for feature exchange between input point clouds. Transformer models mainly benefit from the use of self-attention to capture the global correlations in feature space. However, the global correlations involved in self-attention may not only result in a significant amount of redundant computational overhead but also introduce feature ambiguities, especially in low-overlap scenarios. This is because overlapping regions of point clouds typically do not span a wide range but are rather concentrated around a localized area. Therefore, the correlations with an extensive range of non-overlapping points are ineffective and may degrade the discriminability of features. To address this issue, we present a <strong>E</strong>fficient <strong>P</strong>rior-<strong>E</strong>mbedded <strong>A</strong>ttention <strong>L</strong>earning model (<strong>E</strong>fficientPEAL). By incorporating overlap prior to the learning process, the point clouds are divided into two parts. One part includes points lying in the putative overlapping region and the other includes points located in the putative non-overlapping region. Then, EfficientPEAL performs localized attention with the putative overlapping points. The proposed attention module significantly reduces the computational complexity of the model while achieving competitive performance. Extensive experiments on 3DMatch/3DLoMatch, ScanNet, and KITTI datasets demonstrate its effectiveness.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"293 ","pages":"Article 128591"},"PeriodicalIF":7.5000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425022109","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Learning discriminative point-wise features is critical for partially overlapping point cloud registration. In recent years, integrating a Transformer into point cloud feature representation has demonstrated remarkable success; such models typically use a self-attention module to learn intra-point-cloud features, followed by a cross-attention module that exchanges features between the input point clouds. Transformer models benefit mainly from self-attention's ability to capture global correlations in feature space. However, the global correlations involved in self-attention not only incur a significant amount of redundant computation but can also introduce feature ambiguities, especially in low-overlap scenarios. This is because the overlapping regions of two point clouds typically do not span a wide range but are instead concentrated in a localized area. Correlations with the extensive range of non-overlapping points are therefore ineffective and may degrade the discriminability of features. To address this issue, we present an Efficient Prior-Embedded Attention Learning model (EfficientPEAL). By incorporating an overlap prior into the learning process, each point cloud is divided into two parts: one containing points in the putative overlapping region and the other containing points in the putative non-overlapping region. EfficientPEAL then performs localized attention over the putative overlapping points. The proposed attention module significantly reduces the computational complexity of the model while achieving competitive performance. Extensive experiments on the 3DMatch/3DLoMatch, ScanNet, and KITTI datasets demonstrate its effectiveness.
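To make the core idea concrete, the following is a minimal sketch of prior-embedded localized attention, assuming the overlap prior is available as per-point scores in [0, 1] (e.g., from a one-shot overlap estimator). The names `LocalizedSelfAttention`, `overlap_scores`, and `keep_ratio` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalizedSelfAttention(nn.Module):
    """Self-attention restricted to putative overlapping points (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 4, keep_ratio: float = 0.3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.keep_ratio = keep_ratio  # fraction of points treated as putative overlap

    def forward(self, feats: torch.Tensor, overlap_scores: torch.Tensor) -> torch.Tensor:
        # feats:          (B, N, C) point-wise features
        # overlap_scores: (B, N) overlap prior; higher = more likely in the overlap
        B, N, C = feats.shape
        k = max(1, int(N * self.keep_ratio))
        # Partition: select the top-k putative overlapping points per cloud.
        idx = overlap_scores.topk(k, dim=1).indices          # (B, k)
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, C)     # (B, k, C)
        overlap_feats = feats.gather(1, gather_idx)          # (B, k, C)
        # Attention only among putative overlap points: O(k^2) instead of O(N^2).
        attended, _ = self.attn(overlap_feats, overlap_feats, overlap_feats)
        # Scatter updated features back; non-overlap points pass through unchanged.
        out = feats.clone()
        out.scatter_(1, gather_idx, attended)
        return out
```

Under these assumptions, the quadratic attention cost drops from O(N^2) over all N points to O(k^2) over the k putative overlapping points, which is the source of the efficiency gain the abstract describes when overlap is concentrated in a small region.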
About the Journal:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.