{"title":"Fusion 3D object tracking method based on region and point cloud registration","authors":"Yixin Jin, Jiawei Zhang, Yinhua Liu, Wei Mo, Hua Chen","doi":"10.1117/1.jei.33.4.043048","DOIUrl":null,"url":null,"abstract":"Tracking rigid objects in three-dimensional (3D) space and 6DoF pose estimating are essential tasks in the field of computer vision. In general, the region-based 3D tracking methods have emerged as the optimal solution for weakly textured objects tracking within intricate scenes in recent years. However, tracking robustness in situations such as partial occlusion and similarly colored backgrounds is relatively poor. To address this issue, an improved region-based tracking method is proposed for achieving accurate 3D object tracking in the presence of partial occlusion and similarly colored backgrounds. First, a regional cost function based on the correspondence line is adopted, and a step function is proposed to alleviate the misclassification of sampling points in scenes. Afterward, in order to reduce the influence of similarly colored background and partial occlusion on the tracking performance, a weight function that fuses color and distance information of the object contour is proposed. Finally, the transformation matrix of the inter-frame motion obtained by the above region-based tracking method is used to initialize the model point cloud, and an improved point cloud registration method is adopted to achieve accurate registration between the model point cloud and the object point cloud to further realize accurate object tracking. The experiments are conducted on the region-based object tracking (RBOT) dataset and the real scenes, respectively. The results demonstrate that the proposed method outperforms the state-of-the-art region-based 3D object tracking method. On the RBOT dataset, the average tracking success rate is improved by 0.5% across five image sequences. In addition, in real scenes with similarly colored backgrounds and partial occlusion, the average tracking accuracy is improved by 0.28 and 0.26 mm, respectively.","PeriodicalId":54843,"journal":{"name":"Journal of Electronic Imaging","volume":"8 1","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Electronic Imaging","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1117/1.jei.33.4.043048","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Tracking rigid objects in three-dimensional (3D) space and estimating their 6DoF pose are essential tasks in computer vision. In recent years, region-based 3D tracking methods have emerged as the preferred solution for tracking weakly textured objects in cluttered scenes. However, their robustness under partial occlusion and against similarly colored backgrounds is relatively poor. To address this issue, an improved region-based tracking method is proposed to achieve accurate 3D object tracking in the presence of partial occlusion and similarly colored backgrounds. First, a regional cost function based on correspondence lines is adopted, and a step function is proposed to alleviate the misclassification of sampling points. Then, to reduce the influence of similarly colored backgrounds and partial occlusion on tracking performance, a weight function that fuses color and distance information along the object contour is proposed. Finally, the inter-frame transformation matrix obtained by the region-based tracking stage is used to initialize the model point cloud, and an improved point cloud registration method is adopted to accurately register the model point cloud to the object point cloud, further refining the object pose. Experiments are conducted on the region-based object tracking (RBOT) dataset and in real scenes. The results demonstrate that the proposed method outperforms state-of-the-art region-based 3D object tracking methods. On the RBOT dataset, the average tracking success rate is improved by 0.5% across five image sequences. In addition, in real scenes with similarly colored backgrounds and partial occlusion, the average tracking accuracy is improved by 0.28 and 0.26 mm, respectively.
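To make the two-stage pipeline described above more concrete, the sketch below illustrates (1) a hypothetical weight function that fuses a color-ambiguity term with a contour-distance term for the sampling points, and (2) refinement of the coarse region-based pose by point cloud registration. All helper names, parameters, and the use of standard point-to-plane ICP from Open3D are assumptions made for illustration only; they are not the authors' improved registration method.

```python
# Minimal sketch of the fused tracking pipeline, under the assumptions stated above.
import numpy as np
import open3d as o3d


def contour_point_weights(fg_prob, bg_prob, dist_to_contour, sigma=8.0):
    """Hypothetical weight fusing color and distance cues: down-weight contour
    sample points whose foreground/background color probabilities are similar
    (ambiguous color) or that lie far from the projected object contour
    (e.g., due to partial occlusion)."""
    color_term = np.abs(fg_prob - bg_prob) / (fg_prob + bg_prob + 1e-6)
    dist_term = np.exp(-(dist_to_contour ** 2) / (2.0 * sigma ** 2))
    return color_term * dist_term


def refine_pose_with_registration(model_pts, object_pts, T_region, max_dist=0.01):
    """Refine the coarse 4x4 pose T_region from the region-based stage by
    registering the model point cloud to the observed object point cloud.
    Standard point-to-plane ICP stands in for the paper's improved method."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(model_pts)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(object_pts)
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, T_region,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # refined 4x4 object pose
```

In this reading, the region-based tracker supplies the initialization that keeps the registration step inside its convergence basin, while the weighting suppresses contour samples corrupted by occlusion or background color similarity.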
Journal description
The Journal of Electronic Imaging publishes peer-reviewed papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems.