6D pose estimation and 3D object reconstruction from 2D shape for robotic grasping of objects

Marcell Wolnitza, Osman Kaya, T. Kulvicius, F. Wörgötter, B. Dellen

2022 Sixth IEEE International Conference on Robotic Computing (IRC), December 2022. DOI: 10.1109/IRC55401.2022.00018
We propose a method for 3D object reconstruction and 6D pose estimation from 2D images that uses knowledge about object shape as the primary key. In the proposed pipeline, recognition and labeling of objects in 2D images deliver 2D segment silhouettes that are compared with the 2D silhouettes of projections obtained from various views of a 3D model representing the recognized object class. Transformation parameters are computed directly from the 2D images, making the approach feasible. Furthermore, 3D transformations and projective geometry are employed to arrive at a full 3D reconstruction of the object in camera space using a calibrated setup. The method is quantitatively evaluated using synthetic data and tested with real data. In robot experiments, successful grasping of objects demonstrates its usability in real-world environments. The method is applicable to scenarios where 3D object models, e.g., CAD-models or point clouds, are available and precise pixel-wise segmentation maps of 2D images can be obtained. Different from other methods, the method does not use 3D depth for training, widening the domain of application.
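The core idea of the pipeline, matching a 2D segment silhouette against silhouettes of a 3D model projected from various views, can be illustrated with a minimal sketch. The following is not the authors' implementation; it assumes binary masks as silhouettes and uses intersection-over-union (IoU) as a hypothetical comparison metric to pick the best-matching candidate view:

```python
import numpy as np

def silhouette_iou(a, b):
    """Intersection-over-union of two binary silhouette masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def best_view(segment_mask, candidate_silhouettes):
    """Return the index and score of the candidate silhouette (one per
    rendered view of the 3D model) that best matches the 2D segment.
    The real pipeline would also recover the transformation parameters
    associated with the winning view."""
    scores = [silhouette_iou(segment_mask, s) for s in candidate_silhouettes]
    idx = int(np.argmax(scores))
    return idx, scores[idx]

# Toy example: a square segment vs. three hypothetical rendered views.
seg = np.zeros((8, 8), bool); seg[2:6, 2:6] = True
c0 = np.zeros((8, 8), bool); c0[0:3, 0:3] = True   # poor overlap
c1 = np.zeros((8, 8), bool); c1[2:6, 2:6] = True   # exact match
c2 = np.zeros((8, 8), bool); c2[3:7, 3:7] = True   # partial overlap
idx, score = best_view(seg, [c0, c1, c2])
```

In this toy case the exact-match candidate (`c1`) wins with an IoU of 1.0. A faithful reimplementation would render the silhouettes from a calibrated camera model and refine the pose from the best view, as the abstract describes.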