{"title":"Graduated non-convex feature-metric-based 6D object pose refinement via deep reinforcement learning","authors":"Peiyuan Ni, Marcelo H. Ang Jr","doi":"10.1016/j.robot.2025.105177","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, many works focus on 6D object pose refinement with a single RGB image. Most of them apply the differentiable Levenberg–Marquardt (LM) algorithm as the solver. However, they may easily ignore the importance of the damping parameter denoted by <span><math><mi>λ</mi></math></span>, which affects the accuracy and efficiency of prediction. In this paper, we present a coarse-to-fine feature-metric-based 6D object pose refinement framework, which utilizes the intermediate layers to predict <span><math><mi>λ</mi></math></span> combined with Region of Interest (ROI) alignment and eigenvalues. To facilitate better convergence during the training process, we propose to leverage graduated non-convexity (GNC) to handle uncertainty and feature residual learning in a pixel-level manner. Moreover, current works have not analyzed the control process during the whole iteration process. We propose to use deep reinforcement learning to fit this non-differentiable process, which can reduce redundant steps during the prediction stage. Finally, with a Transformer-based backbone, our algorithm with no iteration control learning (ICL) achieves better performance with Shape-constraint Recurrent Flow (SRF, state-of-the-art object pose refinement method) (Hai et al. 2023) on Linear Model for Object Detection (LineMOD), LineMOD Occlusion and YCB-Video datasets. Moreover, our full algorithm with VGG-16 as the backbone, accelerated with TensorRT, runs at about 94 FPS. It exhibits superior speed compared to RePose (Iwase et al. 2021), and notably surpasses its accuracy, especially for initial poses with large errors. 
The code will be available at <span><span>https://github.com/NiPeiyuan/EARePOSE.git</span></span>.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105177"},"PeriodicalIF":5.2000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092188902500274X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Recently, many works have focused on 6D object pose refinement from a single RGB image. Most of them apply the differentiable Levenberg–Marquardt (LM) algorithm as the solver. However, they may easily overlook the importance of the damping parameter, denoted by λ, which affects both the accuracy and the efficiency of prediction. In this paper, we present a coarse-to-fine feature-metric-based 6D object pose refinement framework, which utilizes the intermediate layers to predict λ, combined with Region of Interest (ROI) alignment and eigenvalues. To facilitate better convergence during training, we propose to leverage graduated non-convexity (GNC) to handle uncertainty and feature residual learning at the pixel level. Moreover, current works have not analyzed the control of the whole iteration process. We propose to use deep reinforcement learning to fit this non-differentiable process, which reduces redundant steps during the prediction stage. Finally, with a Transformer-based backbone, our algorithm without iteration control learning (ICL) achieves better performance compared with Shape-constraint Recurrent Flow (SRF, a state-of-the-art object pose refinement method) (Hai et al. 2023) on the Linear Model for Object Detection (LineMOD), LineMOD Occlusion and YCB-Video datasets. Moreover, our full algorithm with VGG-16 as the backbone, accelerated with TensorRT, runs at about 94 FPS. It is faster than RePose (Iwase et al. 2021) and notably surpasses its accuracy, especially for initial poses with large errors. The code will be available at https://github.com/NiPeiyuan/EARePOSE.git.
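To illustrate why the damping parameter λ matters (this is a generic textbook sketch, not the paper's learned predictor), one damped Gauss–Newton (LM) step solves (JᵀJ + λI)δ = −Jᵀr: a large λ yields small, gradient-descent-like steps, while a small λ approaches the fast Gauss–Newton update. The function names and the toy residual below are illustrative assumptions.

```python
import numpy as np

def lm_step(J, r, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update.

    Solves (J^T J + lam * I) delta = -J^T r.  Large lam gives small,
    safe, gradient-descent-like steps; small lam approaches the
    Gauss-Newton step, which converges quickly near the optimum.
    """
    n = J.shape[1]
    H = J.T @ J + lam * np.eye(n)   # damped normal-equation matrix
    return np.linalg.solve(H, -J.T @ r)

# Toy 1-parameter problem: residual r(x) = x - 3, Jacobian J = [[1]].
x = 0.0
for _ in range(20):
    J = np.array([[1.0]])
    r = np.array([x - 3.0])
    x += lm_step(J, r, lam=0.1)[0]
# x converges toward the minimizer x = 3
```

Choosing λ per iteration (as the paper proposes to learn) is exactly the trade-off visible here: too large and convergence is slow, too small and steps can overshoot when far from the optimum.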
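The role of graduated non-convexity in handling uncertain pixel-level residuals can be sketched with a standard GNC continuation on a Geman–McClure robust loss (a common GNC surrogate; the paper's actual formulation may differ, and the schedule below is an assumption for illustration):

```python
import numpy as np

def gm_weight(r, mu, c=1.0):
    """Per-residual weight from the GNC surrogate of the Geman-McClure loss.

    A large mu makes the surrogate nearly convex (all weights close
    to 1, so every residual is trusted); annealing mu toward 1
    recovers the original robust loss, which down-weights outliers.
    """
    return (mu * c**2 / (r**2 + mu * c**2)) ** 2

res = np.array([0.1, 0.2, 5.0])    # last residual acts as an outlier
for mu in [64.0, 16.0, 4.0, 1.0]:  # graduated non-convexity schedule
    w = gm_weight(res, mu)
    # w would reweight each pixel's feature residual in the LM solve
# After annealing, inliers keep weight near 1; the outlier is suppressed.
```

Starting from the near-convex surrogate and gradually tightening it helps the optimization avoid poor local minima, which matches the abstract's motivation of using GNC for better convergence during training.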
Journal introduction:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.