{"title":"Making Patch Based Descriptors More Distinguishable and Robust for Image Copy Retrieval","authors":"Junaid Baber, Erum Fida, Maheen Bakhtyar, Humaira Ashraf","doi":"10.1109/DICTA.2015.7371281","DOIUrl":null,"url":null,"abstract":"Images have become one of the main sources for the information, learning and entertainment; but due to the advancement and progress in multimedia technologies, millions of images are shared daily on Internet which can be easily duplicated and redistributed. Distribution of these duplicated and transformed images causes a lot of problems and challenges such as piracy, redundancy, and content-based image indexing and retrieval. To address these problems, copy detection systems based on local features are widely used. Initially, keypoints are detected and represented by some robust descriptors. The descriptors are computed over the affine patches around the keypoints, these patches should be repeatable under photometric and geometric transformations. However, there exists two main challenges with patch based descriptors, (1) the affine patch over the keypoint can produce similar descriptors under entirely different scene or context which causes \"ambiguity'' (in-distinctiveness), and (2) the descriptors are not enough \"robust'' under image noise. In this paper, we present a framework that makes descriptor more distinguishable and robust by influencing them with the texture or gradients in vicinity by computing them on different and multiple scales. To evaluate the robustness of descriptors, an experiment on keypoint matching under severe transformations is conducted. On average the robustness of SIFT descriptor is increased up-to 12.5%, and robustness of CSLBP descriptor is increased up-to 31%. The distinctiveness is evaluated on image copy retrieval experiment where copies of images are retrieved under challenging transformations. On average, the performance of SIFT to retrieve all copies is increased up-to 27.27%, and the performance of CSLBP to retrieve all copies is increased up-to 27.02%.","PeriodicalId":214897,"journal":{"name":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2015.7371281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Images have become one of the main sources of information, learning, and entertainment; but with the advancement of multimedia technologies, millions of images are shared daily on the Internet, where they can be easily duplicated and redistributed. Distribution of these duplicated and transformed images causes many problems and challenges, such as piracy, redundancy, and content-based image indexing and retrieval. To address these problems, copy detection systems based on local features are widely used. First, keypoints are detected and represented by robust descriptors. The descriptors are computed over affine patches around the keypoints; these patches should be repeatable under photometric and geometric transformations. However, there are two main challenges with patch-based descriptors: (1) the affine patch around a keypoint can produce similar descriptors in entirely different scenes or contexts, which causes "ambiguity" (indistinctiveness), and (2) the descriptors are not sufficiently "robust" under image noise. In this paper, we present a framework that makes descriptors more distinguishable and robust by influencing them with the texture or gradients in their vicinity, computing them at different and multiple scales. To evaluate the robustness of the descriptors, an experiment on keypoint matching under severe transformations is conducted. On average, the robustness of the SIFT descriptor is increased by up to 12.5%, and the robustness of the CSLBP descriptor is increased by up to 31%. The distinctiveness is evaluated in an image copy retrieval experiment, where copies of images are retrieved under challenging transformations. On average, the performance of SIFT in retrieving all copies is increased by up to 27.27%, and the performance of CSLBP in retrieving all copies is increased by up to 27.02%.
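The sketch below illustrates the multi-scale idea described in the abstract: a patch-based descriptor (SIFT via OpenCV here) is computed at several measurement scales around each detected keypoint and the results are concatenated, so that texture and gradients in the keypoint's vicinity influence the final descriptor. The scale factors, the choice of SIFT, and the simple concatenation scheme are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: multi-scale patch descriptors around each keypoint.
# Assumptions: OpenCV SIFT, scale factors (1, 2, 3), plain concatenation.
import cv2
import numpy as np

def multi_scale_sift(image_gray, scale_factors=(1.0, 2.0, 3.0)):
    sift = cv2.SIFT_create()
    keypoints = sift.detect(image_gray, None)

    descriptors = []
    for kp in keypoints:
        per_scale = []
        for s in scale_factors:
            # Enlarge the measurement region around the same keypoint location.
            kp_scaled = cv2.KeyPoint(kp.pt[0], kp.pt[1], kp.size * s, kp.angle)
            _, desc = sift.compute(image_gray, [kp_scaled])
            if desc is None or len(desc) == 0:
                # Keypoint dropped (e.g. too close to the border): pad with zeros.
                desc = np.zeros((1, 128), dtype=np.float32)
            per_scale.append(desc[0])
        # Concatenate per-scale descriptors into one longer, more distinctive vector.
        descriptors.append(np.concatenate(per_scale))
    return keypoints, np.array(descriptors, dtype=np.float32)

# Example usage:
# img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
# kps, descs = multi_scale_sift(img)   # descs has shape (n_keypoints, 3 * 128)
```

The longer concatenated vectors trade extra storage and matching cost for descriptors that are less likely to collide across unrelated scenes, which is the distinctiveness property the paper targets for copy retrieval.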