{"title":"Integrating Linear Skip-Attention With Transformer-Based Network of Multi-Level Features Extraction for Partial Point Cloud Registration","authors":"Qinyu He, Tao Sun","doi":"10.1049/ipr2.70055","DOIUrl":null,"url":null,"abstract":"<p>Accurate point correspondences is critical for rigid point cloud registration in correspondence-based methods. Many previous learning-based methods employ encoder-decoder backbone for point feature extraction, while applying attention mechanism for sparse superpoints to deal with the partial overlap situation. However, few of these methods focus on the intermediate layers yet mainly pay attention on the top-most patch features, thus neglecting multi-faceted feature perspectives leading to potential overlap areas estimation inaccuracy. Meanwhile, obtaining correct correspondences is usually interfered with the one-to-many case and outliers. To address these issues, we propose a multi-level features extraction network with integrating linear dual attention mechanism into skip-connection stage of encoder-decoder backbone, both efficiently suppressing irrelevant information and guiding residual features to learn the common regions on which the network should focus to tackle the overlap estimation inaccuracy issue, combined with a parallel-structured decoder forming distinguishable features and potential overlapping regions. Additionally, a two-stage correspondences pruning process is designed to tackle the mismatch issue, which mainly depends on the rigid geometric constraint. Extensive experiments conducted on indoor and outdoor scene datasets demonstrate our method's accuracy and stability, by outperforming state-of-the-art methods on registration recall.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70055","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Image Processing","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ipr2.70055","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Accurate point correspondences are critical for rigid point cloud registration in correspondence-based methods. Many previous learning-based methods employ an encoder-decoder backbone for point feature extraction and apply an attention mechanism to sparse superpoints to handle partial overlap. However, few of these methods attend to the intermediate layers; they mainly focus on the top-most patch features, neglecting multi-faceted feature perspectives and leading to inaccurate estimation of potential overlap areas. Meanwhile, extracting correct correspondences is often hampered by one-to-many matches and outliers. To address these issues, we propose a multi-level feature extraction network that integrates a linear dual attention mechanism into the skip-connection stage of the encoder-decoder backbone, which both efficiently suppresses irrelevant information and guides residual features to learn the common regions on which the network should focus, tackling the overlap estimation inaccuracy. This is combined with a parallel-structured decoder that forms distinguishable features and potential overlapping regions. Additionally, a two-stage correspondence pruning process, which relies mainly on the rigid geometric constraint, is designed to tackle the mismatch issue. Extensive experiments on indoor and outdoor scene datasets demonstrate our method's accuracy and stability, outperforming state-of-the-art methods on registration recall.
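To make the two mechanisms named in the abstract more concrete, here is a minimal sketch of a linear (kernelized) cross-attention block applied to skip-connection features. It assumes the common elu(x)+1 feature map and per-point features of shape (B, N, C); the class name, projections, and residual gating are illustrative assumptions, not the authors' dual-attention design.

```python
import torch
import torch.nn as nn


class LinearSkipAttention(nn.Module):
    """Cross-attends skip features to decoder features with O(N) complexity (illustrative sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    @staticmethod
    def _feature_map(x: torch.Tensor) -> torch.Tensor:
        # Positive feature map replacing softmax, which makes attention linear in N.
        return torch.nn.functional.elu(x) + 1.0

    def forward(self, skip_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # skip_feat, dec_feat: (B, N, C) per-point features
        q = self._feature_map(self.q_proj(dec_feat))      # (B, N, C)
        k = self._feature_map(self.k_proj(skip_feat))     # (B, N, C)
        v = self.v_proj(skip_feat)                        # (B, N, C)
        kv = torch.einsum("bnc,bnd->bcd", k, v)           # (B, C, C) summary of keys/values
        z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(dim=1)) + 1e-6)  # normalizer
        attn = torch.einsum("bnc,bcd,bn->bnd", q, kv, z)  # (B, N, C)
        # Residual update of the skip connection with the attended features.
        return skip_feat + self.out(attn)
```

The abstract also states that the two-stage correspondence pruning relies mainly on the rigid geometric constraint. A generic way to exploit that constraint is a pairwise distance-consistency (spatial compatibility) check: a rigid transform preserves pairwise distances, so two correct correspondences should induce nearly equal distances on both sides. The sketch below is a simple NumPy illustration of that idea, not the authors' exact scheme; the threshold `tau` and the voting rule are assumptions.

```python
import numpy as np


def prune_by_rigid_consistency(src: np.ndarray, tgt: np.ndarray,
                               tau: float = 0.1, min_votes: int = 3) -> np.ndarray:
    """src, tgt: (K, 3) matched point pairs; returns indices of retained correspondences.

    For correct pairs (p_i, q_i), (p_j, q_j) under a rigid transform,
    | ||p_i - p_j|| - ||q_i - q_j|| | should be small.
    """
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)  # (K, K)
    d_tgt = np.linalg.norm(tgt[:, None, :] - tgt[None, :, :], axis=-1)  # (K, K)
    compatible = np.abs(d_src - d_tgt) < tau                            # (K, K) bool
    np.fill_diagonal(compatible, False)
    votes = compatible.sum(axis=1)          # number of peers consistent with each pair
    return np.where(votes >= min_votes)[0]


# Hypothetical usage: keep = prune_by_rigid_consistency(src_pts[corr[:, 0]], tgt_pts[corr[:, 1]])
```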
Journal description:
The IET Image Processing journal encompasses research areas related to the generation, processing and communication of visual information. The focus of the journal is the coverage of the latest research results in image and video processing, including image generation and display, enhancement and restoration, segmentation, colour and texture analysis, coding and communication, implementations and architectures as well as innovative applications.
Principal topics include:
Generation and Display - Imaging sensors and acquisition systems, illumination, sampling and scanning, quantization, colour reproduction, image rendering, display and printing systems, evaluation of image quality.
Processing and Analysis - Image enhancement, restoration, segmentation, registration, multispectral, colour and texture processing, multiresolution processing and wavelets, morphological operations, stereoscopic and 3-D processing, motion detection and estimation, video and image sequence processing.
Implementations and Architectures - Image and video processing hardware and software, design and construction, architectures and software, neural, adaptive, and fuzzy processing.
Coding and Transmission - Image and video compression and coding, compression standards, noise modelling, visual information networks, streamed video.
Retrieval and Multimedia - Storage of images and video, database design, image retrieval, video annotation and editing, mixed media incorporating visual information, multimedia systems and applications, image and video watermarking, steganography.
Applications - Innovative application of image and video processing technologies to any field, including life sciences, earth sciences, astronomy, document processing and security.
Current Special Issue Call for Papers:
Evolutionary Computation for Image Processing - https://digital-library.theiet.org/files/IET_IPR_CFP_EC.pdf
AI-Powered 3D Vision - https://digital-library.theiet.org/files/IET_IPR_CFP_AIPV.pdf
Multidisciplinary advancement of Imaging Technologies: From Medical Diagnostics and Genomics to Cognitive Machine Vision, and Artificial Intelligence - https://digital-library.theiet.org/files/IET_IPR_CFP_IST.pdf
Deep Learning for 3D Reconstruction - https://digital-library.theiet.org/files/IET_IPR_CFP_DLR.pdf