Invariant Feature Extraction Functions for UME-Based Point Cloud Detection and Registration

Amit Efraim;Yuval Haitman;Joseph M. Francos
DOI: 10.1109/TIP.2025.3570628
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 3209-3224
Published: 2025-03-21
Available at: https://ieeexplore.ieee.org/document/11008823/
Cited by: 0

Abstract

Point clouds are unordered sets of 3D coordinates with no functional relation imposed on them. The Rigid Transformation Universal Manifold Embedding (RTUME) maps volumetric or surface measurements of a 3D object to matrices such that, when two observations of the same object are related by a rigid transformation, this relation is preserved between their corresponding RTUME matrices, thus providing a linear and robust solution to the registration and detection problems. To make the RTUME framework for 3D object detection and registration applicable to point cloud observations, one must define a function that assigns each point in the cloud a value (feature vector) that is invariant to the action of the transformation group. Since existing feature extraction functions do not achieve the desired level of invariance to rigid transformations, to the variability of sampling patterns, and to model mismatches, we present a novel approach for designing dense feature extraction functions compatible with the requirements of the RTUME framework. One possible implementation of the approach is to adapt existing feature extraction functions, whether learned or analytic, originally designed for estimating point correspondences, to the RTUME framework. The novel feature extraction function design employs integration over $SO(3)$ to marginalize the pose dependency of the extracted features, followed by projecting features between point clouds using nearest-neighbor projection to overcome other sources of model mismatch. In addition, the non-linear functions that define the RTUME mapping are optimized using an MLP model trained to minimize RTUME registration errors. The overall RTUME registration performance is evaluated on standard registration benchmarks and is shown to outperform existing SOTA methods.
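The two mechanisms named in the abstract, marginalizing pose dependency by integrating over $SO(3)$ and transferring features between clouds by nearest-neighbor projection, can be illustrated with a minimal sketch. This is not the paper's implementation: the Monte Carlo rotation sampler, the toy per-point feature, and all function names below are assumptions chosen only to make the idea concrete (the paper may use a different quadrature over $SO(3)$ and learned features).

```python
import numpy as np

def random_rotations(n, rng):
    """Sample n matrices approximately uniformly from SO(3) via QR
    decomposition of Gaussian matrices (illustrative sampler only)."""
    rots = []
    for _ in range(n):
        q, r = np.linalg.qr(rng.standard_normal((3, 3)))
        q = q * np.sign(np.diag(r))   # sign fix -> Haar-uniform on O(3)
        if np.linalg.det(q) < 0:      # reflect onto SO(3)
            q[:, 0] *= -1
        rots.append(q)
    return rots

def marginalized_features(points, feat_fn, n_rot=2000, rng=None):
    """Approximate the integral over SO(3) of a pose-dependent per-point
    feature by averaging it over sampled rotations of the cloud."""
    rng = rng if rng is not None else np.random.default_rng(0)
    acc = np.zeros(points.shape[0])
    for R in random_rotations(n_rot, rng):
        acc += feat_fn(points @ R.T)
    return acc / n_rot

def project_features(src_pts, dst_pts, dst_feats):
    """Nearest-neighbor projection: each source point inherits the
    feature of its closest destination point."""
    d = np.linalg.norm(src_pts[:, None, :] - dst_pts[None, :, :], axis=2)
    return dst_feats[np.argmin(d, axis=1)]

# Toy pose-dependent feature: |x-coordinate| of each point. Its average
# over uniform rotations tends to ||p|| / 2, a rotation-invariant value.
rng = np.random.default_rng(1)
pts = rng.standard_normal((10, 3))
feats = marginalized_features(pts, lambda p: np.abs(p[:, 0]))
```

With this toy feature the marginalized value of a point `p` approaches `||p|| / 2`, so two clouds related by a rigid rotation receive (up to Monte Carlo error) identical per-point features, which is exactly the invariance the RTUME mapping requires before features are projected between clouds.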