{"title":"A Bayesian image annotation framework integrating search and context","authors":"Rui Zhang, Kui Wu, Kim-Hui Yap, L. Guan","doi":"10.1109/MMSP.2010.5662072","DOIUrl":null,"url":null,"abstract":"Conventional approaches to image annotation tackle the problem based on the low-level visual information. Considering the importance of the information on the constrained interaction among the objects in a real world scene, contextual information has been utilized to recognize scene and object categories. In this paper, we propose a Bayesian approach to region-based image annotation, which integrates the content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image with the labeled ones followed by a weighted keyword ranking, which are in turn used by the context model to calculate the a prior probabilities of the object categories. Finally, a Bayesian framework integrates the a priori probabilities and the visual properties of image regions. The framework was evaluated using two databases and several performance measures, which demonstrated its superiority to both visual content-based and context-based approaches.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Workshop on Multimedia Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.2010.5662072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Conventional approaches to image annotation tackle the problem using only low-level visual information. Given the importance of information about the constrained interactions among objects in a real-world scene, contextual information has been exploited to recognize scene and object categories. In this paper, we propose a Bayesian approach to region-based image annotation that integrates content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image against labeled ones, followed by weighted keyword ranking; the selected keywords are then used by the context model to compute the a priori probabilities of the object categories. Finally, a Bayesian framework combines these a priori probabilities with the visual properties of image regions. The framework was evaluated on two databases using several performance measures, demonstrating its superiority over both purely visual content-based and context-based approaches.
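To make the integration step concrete, the following is a minimal sketch of the kind of Bayesian combination described above; it is not taken from the paper, and the notation (K, c, v_r) is assumed for illustration:

\[
  \hat{c}_r \;=\; \arg\max_{c}\; P(c \mid K)\, p(\mathbf{v}_r \mid c),
\]

where \(K\) denotes the keywords returned by the content-based search, \(P(c \mid K)\) is the context model's prior over object categories \(c\), and \(p(\mathbf{v}_r \mid c)\) is the likelihood of the visual features \(\mathbf{v}_r\) extracted from image region \(r\). Under this reading, the context-driven prior reweights the purely visual evidence for each region before the maximum a posteriori label is chosen.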