A Comparative Analysis of Feature Extraction Algorithms for Augmented Reality Applications
M. S. Alam, Malik Morshidi, T. Gunawan, R. F. Olanrewaju
2021 IEEE 7th International Conference on Smart Instrumentation, Measurement and Applications (ICSIMA), 23 August 2021. DOI: 10.1109/ICSIMA50015.2021.9526295
Algorithms for image feature detection and matching are critical in the field of computer vision. Feature extraction and matching underpin many computer vision problems, including object recognition and structure from motion. The computational efficiency and robustness of each feature detector and descriptor algorithm have a major impact on image matching precision and running time, and existing approaches have addressed the performance of matching algorithms whose descriptors are expensive to compute and match. An algorithm's efficiency is measured by the number of matches found and the number of false matches when it is evaluated on a given pair of images, as well as by how quickly it detects and matches features. This paper examines and compares the performance of seven algorithms (SURF, ORB, BRISK, FAST, KAZE, MINEIGEN, MSER) under distinct image transformations: affine transformation, blur, scale, illumination, and rotation. The Oxford dataset is used to assess their robustness and efficiency against these transformations. The time taken to detect features, the time taken to match images, the number of detected feature points, and the total running time are recorded. The quantitative results show that ORB and SURF detect and match more features than the other algorithms, while also being less computationally expensive and more robust. In addition, ORB and SURF are highly robust to outliers, and the time they take to match against the reference image is significantly lower. However, the efficiency of SURF degrades under blur. FAST is good at detecting corners but loses efficiency under the other transformations. The experiments show that each algorithm, when subjected to these various alterations, has its own set of advantages and limitations.