{"title":"SIFT in perception-based color space","authors":"Yan Cui, A. Pagani, D. Stricker","doi":"10.1109/ICIP.2010.5651165","DOIUrl":null,"url":null,"abstract":"Scale Invariant Feature Transform (SIFT) has been proven to be the most robust local invariant feature descriptor. However, SIFT is designed mainly for grayscale images. Many local features can be misclassified if their color information is ignored. Motivated by perceptual principles, this paper addresses a new color space, called perception-based color space, in which the associated metric approximates perceived distances and color displacements and captures illumination invariant relationship. Instead of using grayscale values to represent the input image, the proposed approach builds the SIFT descriptors in the new color space, resulting in a descriptor that is more robust than the standard SIFT with respect to color and illumination variations. The evaluation results support the potential of the proposed approach.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"11 10","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Conference on Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP.2010.5651165","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Scale Invariant Feature Transform (SIFT) has been proven to be the most robust local invariant feature descriptor. However, SIFT is designed mainly for grayscale images, and many local features can be misclassified if their color information is ignored. Motivated by perceptual principles, this paper introduces a new color space, called perception-based color space, in which the associated metric approximates perceived distances and color displacements and captures an illumination-invariant relationship. Instead of using grayscale values to represent the input image, the proposed approach builds the SIFT descriptors in the new color space, resulting in a descriptor that is more robust than standard SIFT with respect to color and illumination variations. The evaluation results support the potential of the proposed approach.
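To illustrate the general idea of building SIFT descriptors on color channels rather than on grayscale values, the following Python sketch uses OpenCV. It is not the paper's method: CIELAB is used only as a stand-in for the proposed perception-based color space, the paper's illumination-invariant metric is not reproduced, and the function and file names are hypothetical. Keypoints are detected once on the lightness channel, a 128-D SIFT descriptor is computed per channel at those keypoints, and the three descriptors are concatenated.

    # Hypothetical sketch: per-channel SIFT in a perceptually motivated color space.
    # CIELAB stands in for the paper's perception-based color space; the
    # illumination-invariant metric described in the paper is NOT reproduced.
    import cv2
    import numpy as np

    def color_sift_descriptors(bgr_image):
        """Detect keypoints on the lightness channel, then compute and
        concatenate SIFT descriptors over all three color channels."""
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)   # L, a, b channels
        channels = cv2.split(lab)

        sift = cv2.SIFT_create()
        # Detect on the lightness channel only, so every channel shares the
        # same keypoint locations, scales, and orientations.
        keypoints = sift.detect(channels[0], None)

        descriptors = []
        for ch in channels:
            _, desc = sift.compute(ch, keypoints)          # 128-D per channel
            descriptors.append(desc)
        # 384-D descriptor per keypoint (3 channels x 128 dimensions).
        return keypoints, np.hstack(descriptors)

    if __name__ == "__main__":
        img = cv2.imread("example.jpg")                    # hypothetical input image
        kps, desc = color_sift_descriptors(img)
        print(len(kps), desc.shape)

Concatenating per-channel descriptors is a common way to add color information to SIFT; the paper's contribution is the choice of color space and metric in which those gradients are measured, which this sketch does not attempt to reproduce.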