Boosted Edge Orientation Histograms for Grasping Point Detection
L. Lefakis, H. Wildenauer, Manuel Pascual Garcia-Tubio, L. Szumilas
2010 20th International Conference on Pattern Recognition, 2010-08-23. DOI: 10.1109/ICPR.2010.990
In this paper, we describe a novel algorithm for detecting grasping points in images of previously unseen objects. A basic building block of our approach is a newly devised descriptor that represents semi-local grasping point shape using edge orientation histograms. Combined with boosting, our method learns discriminative grasp point models for new objects from a set of annotated real-world images. The method has been extensively evaluated on challenging images of real scenes with widely varying illumination conditions, scene complexity, and viewpoints. Our experiments show that the method works in a stable manner and that its performance compares favorably with the state-of-the-art.
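The abstract gives no implementation details, but a minimal sketch of the general idea, computing an edge-orientation-histogram descriptor over a semi-local image patch and training a boosted classifier on annotated patches, might look like the following. The patch grid, bin count, classifier settings, and function names are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch: edge orientation histograms + boosting for grasp point
# classification. Descriptor layout and boosting setup are assumptions; the
# paper's actual method may differ.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier  # boosted decision stumps by default

def edge_orientation_histogram(patch, cells=4, bins=9):
    """Describe a grayscale patch by per-cell histograms of gradient
    orientation, weighted by gradient magnitude (an EOH-style descriptor)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = patch.shape
    feat = []
    for i in range(cells):
        for j in range(cells):
            sl = (slice(i * h // cells, (i + 1) * h // cells),
                  slice(j * w // cells, (j + 1) * w // cells))
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0.0, np.pi),
                                   weights=mag[sl])
            feat.append(hist / (hist.sum() + 1e-8))  # per-cell normalization
    return np.concatenate(feat)

def train_grasp_point_classifier(patches, labels):
    """Boost weak learners on EOH descriptors of annotated patches
    (label 1 = grasping point, 0 = background)."""
    X = np.stack([edge_orientation_histogram(p) for p in patches])
    clf = AdaBoostClassifier(n_estimators=200)
    return clf.fit(X, np.asarray(labels))
```

At test time, the same descriptor would be computed for candidate patches (e.g., in a sliding-window fashion) and scored with the trained classifier's predict_proba to localize likely grasping points.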