Markus Vincze, Jean-Baptiste Weibel, Stefan Thalhammer, Hrishikesh Gupta, Philipp Ausserlechner
{"title":"[Recognizing transparent objects for laboratory automation].","authors":"Markus Vincze, Jean-Baptiste Weibel, Stefan Thalhammer, Hrishikesh Gupta, Philipp Ausserlechner","doi":"10.1007/s00502-023-01158-w","DOIUrl":null,"url":null,"abstract":"<p><p>While matte objects can be visually recognized well and grasped with robots, transparent objects pose new challenges. Modern color and depth cameras (RGB-D) do not deliver correct depth data but distorted images of the background. In this paper, we show which methods are suitable to detect transparent objects in color images only and to determine their pose. Using a robotic system, views of the targeted object are generated and annotated to learn methods and to obtain data for evaluation. We also show that by using an improved method for fitting the 3D pose, a significant improvement in the accuracy of pose estimation is achieved. Thus, false detections can be eliminated and for correct detections the accuracy of pose estimation is improved. This makes it possible to grasp transparent objects with a robot.</p>","PeriodicalId":93547,"journal":{"name":"Elektrotechnik und Informationstechnik : E & I","volume":"140 6","pages":"519-529"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584713/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Elektrotechnik und Informationstechnik : E & I","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00502-023-01158-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/9/12 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
While matte objects can be recognized visually and grasped with robots reliably, transparent objects pose new challenges. Modern color and depth (RGB-D) cameras do not deliver correct depth data for transparent objects; instead, they capture distorted images of the background. In this paper, we show which methods are suitable for detecting transparent objects in color images alone and for determining their pose. Using a robotic system, views of the target object are generated and annotated to train the methods and to obtain data for evaluation. We also show that an improved method for fitting the 3D pose yields a significant improvement in pose-estimation accuracy. Thus, false detections can be eliminated, and for correct detections the accuracy of pose estimation is improved. This makes it possible to grasp transparent objects with a robot.