Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang
{"title":"具有双重关注机制的模态融合网络用于 6D 姿态估计","authors":"Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang","doi":"10.1007/s00371-024-03614-w","DOIUrl":null,"url":null,"abstract":"<p>The 6D pose estimation based on RGB-D data holds significant application value in computer vision and related fields. Currently, deep learning methods commonly employ convolutional networks for feature extraction, which are sensitive to keypoints at close distances but overlook information related to keypoints at longer distances. Moreover, in subsequent stages, there is a failure to effectively fuse spatial features (depth channel features) and color texture features (RGB channel features). Consequently, this limitation results in compromised accuracy in existing 6D pose networks based on RGB-D data. To solve this problem, a novel end-to-end 6D pose estimation network is proposed. In the branch of depth data processing network, the global spatial weight is established by using the attention mechanism of mask vector to realize robust extraction of depth features. In the phase of feature fusion, a symmetric fusion module is introduced. In this module, spatial features and color texture features are self-related fused by means of cross-attention mechanism. Experimental evaluations were performed on the LINEMOD and LINEMOD-OCLUSION datasets, and the ADD(-S) scores of our method can reach 95.84% and 47.89%, respectively. Compared to state-of-the-art methods, our method demonstrates superior performance in pose estimation for objects with complex shapes. Moreover, in the presence of occlusion, the pose estimation accuracy of our method for asymmetric objects has been effectively improved.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A modal fusion network with dual attention mechanism for 6D pose estimation\",\"authors\":\"Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang\",\"doi\":\"10.1007/s00371-024-03614-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The 6D pose estimation based on RGB-D data holds significant application value in computer vision and related fields. Currently, deep learning methods commonly employ convolutional networks for feature extraction, which are sensitive to keypoints at close distances but overlook information related to keypoints at longer distances. Moreover, in subsequent stages, there is a failure to effectively fuse spatial features (depth channel features) and color texture features (RGB channel features). Consequently, this limitation results in compromised accuracy in existing 6D pose networks based on RGB-D data. To solve this problem, a novel end-to-end 6D pose estimation network is proposed. In the branch of depth data processing network, the global spatial weight is established by using the attention mechanism of mask vector to realize robust extraction of depth features. In the phase of feature fusion, a symmetric fusion module is introduced. In this module, spatial features and color texture features are self-related fused by means of cross-attention mechanism. Experimental evaluations were performed on the LINEMOD and LINEMOD-OCLUSION datasets, and the ADD(-S) scores of our method can reach 95.84% and 47.89%, respectively. 
Compared to state-of-the-art methods, our method demonstrates superior performance in pose estimation for objects with complex shapes. Moreover, in the presence of occlusion, the pose estimation accuracy of our method for asymmetric objects has been effectively improved.</p>\",\"PeriodicalId\":501186,\"journal\":{\"name\":\"The Visual Computer\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Visual Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00371-024-03614-w\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03614-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A modal fusion network with dual attention mechanism for 6D pose estimation
6D pose estimation from RGB-D data has significant application value in computer vision and related fields. Current deep learning methods commonly rely on convolutional networks for feature extraction; these are sensitive to keypoints at close range but overlook information from keypoints at longer distances. Moreover, later stages fail to effectively fuse spatial features (depth-channel features) with color texture features (RGB-channel features). This limitation compromises the accuracy of existing 6D pose networks based on RGB-D data. To address this problem, a novel end-to-end 6D pose estimation network is proposed. In the depth-processing branch, a mask-vector attention mechanism establishes global spatial weights to extract depth features robustly. In the feature fusion stage, a symmetric fusion module is introduced, in which spatial features and color texture features are mutually fused via a cross-attention mechanism. Experimental evaluations were performed on the LINEMOD and LINEMOD-OCCLUSION datasets, where our method reaches ADD(-S) scores of 95.84% and 47.89%, respectively. Compared to state-of-the-art methods, our method demonstrates superior performance in pose estimation for objects with complex shapes. Moreover, under occlusion, the pose estimation accuracy of our method for asymmetric objects is effectively improved.
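The abstract describes a symmetric fusion module in which depth (spatial) and RGB (color texture) features are fused via cross-attention. Below is a minimal PyTorch sketch of one way such a bidirectional cross-attention fusion can be structured; the class name, feature dimensions, and the exact two-way design are illustrative assumptions, since the paper's actual implementation is not detailed in the abstract.

```python
# Minimal sketch of a symmetric cross-attention fusion module for RGB-D features.
# All names, dimensions, and the bidirectional structure are illustrative assumptions;
# the paper's actual architecture is not specified in the abstract.
import torch
import torch.nn as nn


class SymmetricCrossAttentionFusion(nn.Module):
    """Fuses color-texture (RGB) features with spatial (depth) features.

    Each modality attends to the other: RGB features serve as queries over depth
    features and vice versa; the two attended results are concatenated and
    projected back to the original feature dimension.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-attention in both directions (RGB -> depth, depth -> RGB).
        self.rgb_to_depth = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (batch, num_points, dim), e.g. per-pixel or per-point features.
        rgb_attended, _ = self.rgb_to_depth(query=rgb_feat, key=depth_feat, value=depth_feat)
        depth_attended, _ = self.depth_to_rgb(query=depth_feat, key=rgb_feat, value=rgb_feat)
        fused = torch.cat([rgb_attended, depth_attended], dim=-1)
        return self.norm(self.proj(fused))


if __name__ == "__main__":
    fusion = SymmetricCrossAttentionFusion(dim=256, num_heads=4)
    rgb = torch.randn(2, 500, 256)    # dummy RGB-branch features
    depth = torch.randn(2, 500, 256)  # dummy depth-branch features
    print(fusion(rgb, depth).shape)   # torch.Size([2, 500, 256])
```

The symmetric (two-query) arrangement lets each modality weight the other's features, which matches the abstract's description of spatial and color texture features being fused with cross-attention rather than simple concatenation.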