Modeling Image Context Using Object Centered Grid

S. N. Parizi, I. Laptev, Alireza Tavakoli Targhi
{"title":"使用对象中心网格建模图像上下文","authors":"S. N. Parizi, I. Laptev, Alireza Tavakoli Targhi","doi":"10.1109/DICTA.2009.80","DOIUrl":null,"url":null,"abstract":"Context plays a valuable role in any image understanding task confirmed by numerous studies which have shown the importance of contextual information in computer vision tasks, like object detection, scene classification and image retrieval. Studies of human perception on the tasks of scene classification and visual search have shown that human visual system makes extensive use of contextual information as postprocessing in order to index objects. Several recent computer vision approaches use contextual information to improve object recognition performance. They mainly use global information of the whole image by dividing the image into several predefined subregions, so called fixed grid. In this paper we propose an alternative approach to retrieval of contextual information, by customizing the location of the grid based on salient objects in the image. We claim this approach to result in more informative contextual features compared to the fixed grid based strategy. To compare our results with the most relevant and recent papers, we use PASCAL 2007 data set. Our experimental results show an improvement in terms of Mean Average Precision.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Modeling Image Context Using Object Centered Grid\",\"authors\":\"S. N. Parizi, I. Laptev, Alireza Tavakoli Targhi\",\"doi\":\"10.1109/DICTA.2009.80\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Context plays a valuable role in any image understanding task confirmed by numerous studies which have shown the importance of contextual information in computer vision tasks, like object detection, scene classification and image retrieval. Studies of human perception on the tasks of scene classification and visual search have shown that human visual system makes extensive use of contextual information as postprocessing in order to index objects. Several recent computer vision approaches use contextual information to improve object recognition performance. They mainly use global information of the whole image by dividing the image into several predefined subregions, so called fixed grid. In this paper we propose an alternative approach to retrieval of contextual information, by customizing the location of the grid based on salient objects in the image. We claim this approach to result in more informative contextual features compared to the fixed grid based strategy. To compare our results with the most relevant and recent papers, we use PASCAL 2007 data set. 
Our experimental results show an improvement in terms of Mean Average Precision.\",\"PeriodicalId\":277395,\"journal\":{\"name\":\"2009 Digital Image Computing: Techniques and Applications\",\"volume\":\"68 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2009 Digital Image Computing: Techniques and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DICTA.2009.80\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 Digital Image Computing: Techniques and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2009.80","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

Context plays a valuable role in any image understanding task, as confirmed by numerous studies that have shown the importance of contextual information in computer vision tasks such as object detection, scene classification and image retrieval. Studies of human perception on the tasks of scene classification and visual search have shown that the human visual system makes extensive use of contextual information as post-processing in order to index objects. Several recent computer vision approaches use contextual information to improve object recognition performance. They mainly use global information from the whole image by dividing it into several predefined subregions, a so-called fixed grid. In this paper we propose an alternative approach to retrieving contextual information: customizing the location of the grid based on salient objects in the image. We claim that this approach results in more informative contextual features than the fixed-grid strategy. To compare our results with the most relevant and recent work, we use the PASCAL 2007 data set. Our experimental results show an improvement in terms of Mean Average Precision.
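To make the contrast between the two pooling schemes concrete, the sketch below (not the authors' code) lays out a grid of cells either over the whole image (fixed grid) or over a window derived from a salient object's bounding box (object-centered grid), and concatenates a per-cell descriptor. The per-cell intensity histogram, the 3x3 grid, and the choice of enlarging the object box by one box width/height on each side are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: fixed grid vs. object-centered grid for contextual feature pooling.
# Assumptions (not from the paper): 3x3 cells, intensity histograms as the per-cell
# descriptor, and a context window equal to the object box enlarged by its own size.
import numpy as np

def cell_histogram(patch, bins=8):
    """Toy per-cell descriptor: a normalized grayscale intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def grid_features(image, x0, y0, x1, y1, rows=3, cols=3, bins=8):
    """Pool per-cell descriptors over the window (x0, y0, x1, y1) and concatenate."""
    h, w = image.shape
    x0, x1 = max(0, x0), min(w, x1)
    y0, y1 = max(0, y0), min(h, y1)
    xs = np.linspace(x0, x1, cols + 1).astype(int)
    ys = np.linspace(y0, y1, rows + 1).astype(int)
    feats = []
    for r in range(rows):
        for c in range(cols):
            patch = image[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            feats.append(cell_histogram(patch, bins))
    return np.concatenate(feats)

def fixed_grid_context(image, rows=3, cols=3):
    """Fixed grid: cells cover the whole image at predefined positions."""
    h, w = image.shape
    return grid_features(image, 0, 0, w, h, rows, cols)

def object_centered_context(image, bbox, rows=3, cols=3):
    """Object-centered grid: cells cover a window placed around the salient
    object's bounding box, here enlarged by one box width/height per side."""
    x0, y0, x1, y1 = bbox
    bw, bh = x1 - x0, y1 - y0
    return grid_features(image, x0 - bw, y0 - bh, x1 + bw, y1 + bh, rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((240, 320))     # stand-in grayscale image
    bbox = (100, 60, 180, 160)       # hypothetical detected object box
    print(fixed_grid_context(img).shape)            # (3*3*8,) = (72,)
    print(object_centered_context(img, bbox).shape) # same dimensionality
```

Both schemes produce descriptors of the same dimensionality, so they can be swapped as input to the same classifier; the difference is only whether cell placement is tied to the image frame or to the detected object.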