Virtual line segment-based Hough transform
Ji Y. Chang, A. Hanson
Proceedings of 12th International Conference on Pattern Recognition, 9 October 1994
DOI: 10.1109/ICPR.1994.576226

Abstract. The generalized Hough transform (GHough) is a useful technique for detecting and locating 2D shapes. However, GHough requires a 4D accumulator array to detect objects of unknown scale and orientation. In this paper, we propose an extension of GHough, the virtual line segment-based Hough transform (VHough), which requires much less storage than GHough while accurately determining the scale and orientation of an object instance. VHough takes O(N²) time, where N is the number of edge pixels in an image, but requires only a 2D accumulator array to detect arbitrarily rotated and scaled objects. We present experimental results showing that VHough is well suited to recognition tasks when no a priori knowledge of the parameters is available.
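For context, the baseline the abstract compares against can be sketched as follows. This is a minimal illustration of the classic generalized Hough transform voting step in its 2D special case (known scale and orientation), not the paper's VHough; the R-table construction, the 10-degree gradient-angle binning, and all names here are illustrative assumptions.

```python
from collections import defaultdict

def build_r_table(template_edges, ref_point):
    """Classic GHough R-table: for each template edge point, store the
    displacement vector to the reference point, indexed by a coarse
    gradient-angle bin (10-degree bins here, an arbitrary choice)."""
    table = defaultdict(list)
    for (x, y), theta in template_edges:
        bin_idx = int(theta // 10) % 36
        table[bin_idx].append((ref_point[0] - x, ref_point[1] - y))
    return table

def ghough_vote(image_edges, r_table):
    """Vote in a 2D accumulator for the reference-point location,
    assuming the object's scale and orientation match the template
    (the 2D case; unknown scale/orientation would need 4D, as the
    abstract notes)."""
    acc = defaultdict(int)
    for (x, y), theta in image_edges:
        bin_idx = int(theta // 10) % 36
        for dx, dy in r_table.get(bin_idx, []):
            acc[(x + dx, y + dy)] += 1
    return acc
```

Usage: build the R-table from a template, vote with edges from an image containing a translated copy of the shape, and take the accumulator peak as the detected reference point. With each edge point contributing one displacement, the peak count equals the number of matched edges.

```python
template = [((5, 5), 0.0), ((5, 7), 90.0), ((7, 5), 180.0)]
r_table = build_r_table(template, ref_point=(6, 6))
image = [((x + 10, y + 10), t) for (x, y), t in template]  # shifted copy
acc = ghough_vote(image, r_table)
peak = max(acc, key=acc.get)  # → (16, 16), the shifted reference point
```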