Edgar Cortés-Gallardo, C. Moreno-García, Alfredo Zhu, Daniela Chípuli-Silva, J. A. Gonzalez-Gonzalez, Domenico Morales-Ortiz, Sebastian Fernandez, Bernardo Urriza, Juan Valverde-López, Arath Marín, Hugo Pérez, J. Izquierdo-Reyes, Rogelio Bustamante-Bello
2019 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE), November 2019. DOI: 10.1109/ICMEAE.2019.00017
A Comparison of Feature Extractors for Panorama Stitching in an Autonomous Car Architecture
Panorama stitching merges frames to create a 360° view. This technique is proposed for implementation in autonomous vehicles in place of an external 360-degree camera, mainly because of its reduced cost and improved aerodynamics. The strategy requires a fast and robust set of features to be extracted from the images captured by the cameras placed around the inside of the car, so that the panoramic view can be computed effectively in real time and hazards on the road avoided. This paper compares and discusses three feature extraction methods (SIFT, BRISK, and SURF) in order to decide which is most suitable for a panorama stitching application in an autonomous car architecture. Experimental validation shows that SURF performs best under a variety of image transformations, and thus appears to be the most suitable of the three methods, given its accuracy when matching features between image pairs while maintaining low computation time. Furthermore, comparing our results against similar work increases the reliability of our methodology and the reach of our conclusions.
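Once features have been extracted (with SIFT, BRISK, or SURF) and matched between two frames, stitching proceeds by estimating a homography from the matched keypoints and warping one frame into the other's plane. The abstract does not describe the authors' implementation, so the following is only an illustrative sketch of that estimation step: the classic direct linear transform (DLT), written in NumPy, which recovers the 3×3 homography from four or more point correspondences.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the direct linear transform (DLT).

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 4.
    Each correspondence (x, y) -> (u, v) contributes two linear
    equations in the 9 entries of H; the solution is the right
    singular vector of the stacked system with the smallest
    singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # Null-space vector of A = last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

# Sanity check on a known transform: a pure translation by (5, 3).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src + np.array([5.0, 3.0])
H = estimate_homography(src, dst)
# H should be close to [[1, 0, 5], [0, 1, 3], [0, 0, 1]].
```

In a real pipeline this step would sit inside a RANSAC loop to reject the outlier matches that any of the three detectors inevitably produce, which is where the accuracy of the extracted features directly affects stitching quality.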