EfficientPEAL: Efficient prior-embedded attention learning for partially overlapping point cloud registration

IF 7.5 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Junle Yu, Wenhui Zhou, Zhehao Shen, Yongwei Miao
{"title":"EfficientPEAL: Efficient prior-embedded attention learning for partially overlapping point cloud registration","authors":"Junle Yu ,&nbsp;Wenhui Zhou ,&nbsp;Zhehao Shen ,&nbsp;Yongwei Miao","doi":"10.1016/j.eswa.2025.128591","DOIUrl":null,"url":null,"abstract":"<div><div>Learning discriminative point-wise features is critical for partially overlapping point cloud registration. In recent years, the integration of a Transformer into point cloud feature representation has demonstrated remarkable success, which typically involves a self-attention module to learn intra-point-cloud features, followed by a cross-attention module for feature exchange between input point clouds. Transformer models mainly benefit from the use of self-attention to capture the global correlations in feature space. However, the global correlations involved in self-attention may not only result in a significant amount of redundant computational overhead but also introduce feature ambiguities, especially in low-overlap scenarios. This is because overlapping regions of point clouds typically do not span a wide range but are rather concentrated around a localized area. Therefore, the correlations with an extensive range of non-overlapping points are ineffective and may degrade the discriminability of features. To address this issue, we present a <strong>E</strong>fficient <strong>P</strong>rior-<strong>E</strong>mbedded <strong>A</strong>ttention <strong>L</strong>earning model (<strong>E</strong>fficientPEAL). By incorporating overlap prior to the learning process, the point clouds are divided into two parts. One part includes points lying in the putative overlapping region and the other includes points located in the putative non-overlapping region. Then, EfficientPEAL performs localized attention with the putative overlapping points. The proposed attention module significantly reduces the computational complexity of the model while achieving competitive performance. Extensive experiments on 3DMatch/3DLoMatch, ScanNet, and KITTI datasets demonstrate its effectiveness.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"293 ","pages":"Article 128591"},"PeriodicalIF":7.5000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425022109","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Learning discriminative point-wise features is critical for partially overlapping point cloud registration. In recent years, the integration of a Transformer into point cloud feature representation has demonstrated remarkable success; this typically involves a self-attention module to learn intra-point-cloud features, followed by a cross-attention module for feature exchange between the input point clouds. Transformer models mainly benefit from the use of self-attention to capture global correlations in feature space. However, the global correlations involved in self-attention may not only incur a significant amount of redundant computational overhead but also introduce feature ambiguities, especially in low-overlap scenarios. This is because the overlapping regions of point clouds typically do not span a wide range but are instead concentrated around a localized area, so correlations with the extensive range of non-overlapping points are ineffective and may degrade the discriminability of features. To address this issue, we present an Efficient Prior-Embedded Attention Learning model (EfficientPEAL). By incorporating an overlap prior into the learning process, each point cloud is divided into two parts: one contains the points lying in the putative overlapping region, and the other contains the points located in the putative non-overlapping region. EfficientPEAL then performs localized attention over the putative overlapping points. The proposed attention module significantly reduces the computational complexity of the model while achieving competitive performance. Extensive experiments on the 3DMatch/3DLoMatch, ScanNet, and KITTI datasets demonstrate its effectiveness.
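The abstract describes the localized attention mechanism only in prose. As a rough illustration, the following is a minimal PyTorch sketch of the core idea as stated above: split each point cloud by a putative overlap prior and run self-attention only over the putative overlapping points. Everything here (the class name OverlapLocalizedAttention, the overlap_scores input, the threshold tau) is an illustrative assumption, not the authors' implementation, which this page does not include.

```python
# Minimal sketch (NOT the authors' code): prior-embedded localized attention.
# Assumed inputs: point-wise features and a per-point overlap prior in [0, 1]
# produced by an earlier pipeline stage; tau is an illustrative threshold.
import torch
import torch.nn as nn

class OverlapLocalizedAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, tau: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.tau = tau  # overlap-score threshold for the putative partition

    def forward(self, feats: torch.Tensor, overlap_scores: torch.Tensor) -> torch.Tensor:
        # feats: (N, C) point-wise features of one point cloud
        # overlap_scores: (N,) putative overlap prior in [0, 1]
        mask = overlap_scores > self.tau      # putative overlapping points
        if not mask.any():                    # degenerate case: prior selects nothing
            return feats
        ov = feats[mask].unsqueeze(0)         # (1, M, C); M << N in low overlap
        out, _ = self.attn(ov, ov, ov)        # attention is O(M^2), not O(N^2)
        refined = feats.clone()
        refined[mask] = out.squeeze(0)        # non-overlapping points pass through
        return refined

# Usage on random data (shapes only; real features come from a point backbone):
layer = OverlapLocalizedAttention(dim=256)
feats = torch.randn(5000, 256)
scores = torch.rand(5000)
refined = layer(feats, scores)  # (5000, 256)
```

Since self-attention cost grows quadratically with sequence length, restricting it to the M putative overlapping points out of N total cuts that stage from O(N^2) to O(M^2); in low-overlap scenes M can be far smaller than N, which is where the efficiency claim in the abstract comes from.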
Source Journal
Expert Systems with Applications
Category: Engineering Technology - Engineering: Electrical & Electronic
CiteScore: 13.80
Self-citation rate: 10.60%
Articles published: 2045
Review time: 8.7 months
Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.