Visual object recognition using DAISY descriptor

Chao Zhu, Charles-Edmond Bichot, Liming Chen
{"title":"Visual object recognition using DAISY descriptor","authors":"Chao Zhu, Charles-Edmond Bichot, Liming Chen","doi":"10.1109/ICME.2011.6011957","DOIUrl":null,"url":null,"abstract":"Visual content description is a key issue for the task of machine-based visual object categorization (VOC). A good visual descriptor should be both discriminative enough and computationally efficient while possessing some properties of robustness to viewpoint changes and lighting condition variations. The recent literature has featured local image descriptors, e.g. SIFT, as the main trend in VOC. However, it is well known that SIFT is computationally expensive, especially when the number of objects/concepts and learning data increase significantly. In this paper, we investigate the DAISY, which is a new fast local descriptor introduced for wide baseline matching problem, in the context of VOC. We carefully evaluate and compare the DAISY descriptor with SIFT both in terms of recognition accuracy and computation complexity on two standard image benchmarks - Caltech 101 and PASCAL VOC 2007. The experimental results show that DAISY outperforms the state-of-the-art SIFT while using shorter descriptor length and operating 3 times faster. When displaying a similar recognition accuracy to SIFT, DAISY can operate 12 times faster.","PeriodicalId":433997,"journal":{"name":"2011 IEEE International Conference on Multimedia and Expo","volume":"17 5","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE International Conference on Multimedia and Expo","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2011.6011957","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27

Abstract

Visual content description is a key issue for the task of machine-based visual object categorization (VOC). A good visual descriptor should be both discriminative enough and computationally efficient, while possessing some robustness to viewpoint changes and lighting condition variations. The recent literature has featured local image descriptors, e.g., SIFT, as the main trend in VOC. However, it is well known that SIFT is computationally expensive, especially when the number of objects/concepts and the amount of learning data increase significantly. In this paper, we investigate DAISY, a new fast local descriptor originally introduced for the wide-baseline matching problem, in the context of VOC. We carefully evaluate and compare the DAISY descriptor with SIFT in terms of both recognition accuracy and computational complexity on two standard image benchmarks: Caltech 101 and PASCAL VOC 2007. The experimental results show that DAISY outperforms the state-of-the-art SIFT while using a shorter descriptor and operating 3 times faster. At a recognition accuracy similar to SIFT's, DAISY can operate 12 times faster.
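To make the comparison concrete, the following is a minimal, illustrative sketch of how one might extract dense DAISY and dense SIFT descriptors from a single image and time them, using scikit-image's daisy function and OpenCV's SIFT. It is not the authors' pipeline: the grid step, the DAISY ring/histogram settings, and the test image path are assumptions for illustration, and the paper's own DAISY configuration (which the abstract says yields a shorter descriptor than SIFT's 128 dimensions) is not specified here.

# Illustrative sketch only (not the paper's setup): time dense DAISY vs. dense SIFT
# on one grayscale image. Requires scikit-image and opencv-python; the image path
# and all parameter values below are placeholders chosen for the example.
import time

import cv2
from skimage.feature import daisy

# Load any test image in grayscale (replace the placeholder path with a real file).
gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# DAISY: dense descriptors on a regular grid; 'step' controls sampling density.
t0 = time.perf_counter()
daisy_desc = daisy(gray, step=8, radius=15, rings=3, histograms=8, orientations=8)
t_daisy = time.perf_counter() - t0
# With these settings each point gets (3*8 + 1) * 8 = 200 values.
print("DAISY:", daisy_desc.shape, f"{t_daisy:.3f}s")

# SIFT: compute 128-d descriptors on the same dense grid for a comparable test.
sift = cv2.SIFT_create()
step = 8
keypoints = [
    cv2.KeyPoint(float(x), float(y), float(step))
    for y in range(step, gray.shape[0] - step, step)
    for x in range(step, gray.shape[1] - step, step)
]
t0 = time.perf_counter()
_, sift_desc = sift.compute(gray, keypoints)
t_sift = time.perf_counter() - t0
print("SIFT :", sift_desc.shape, f"{t_sift:.3f}s")

In a typical VOC pipeline of that period, such dense descriptors would then be quantized into a bag-of-visual-words representation and fed to a classifier; the timings above only illustrate the descriptor-extraction cost that the abstract compares.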