Visualizing Bag-of-Features Image Categorization Using Anchored Maps

Authors: Gao Yi, Hsiang-Yun Wu, Kazuo Misue, Kazuyo Mizuno, Shigeo Takahashi
Venue: International Symposium on Visual Information Communication and Interaction
Publication date: 2014-08-05
DOI: 10.1145/2636240.2636858 (https://doi.org/10.1145/2636240.2636858)
Citations: 1
Abstract
The bag-of-features model is one of the most popular and promising approaches for extracting the underlying semantics from image databases. However, the associated image categorization based on machine learning techniques may not convince us of its validity, since we cannot visually verify how the images have been classified in the high-dimensional image feature space. This paper aims to visually rearrange the images in the projected feature space by taking advantage of a set of representative features, called visual words, obtained using the bag-of-features model. Our main idea is to associate each image with a specific number of visual words to compose a bipartite graph, and then lay out the overall set of images using an anchored map representation in which the ordering of anchor nodes is optimized through a genetic algorithm. To handle relatively large image datasets, we adaptively merge the most similar pair of images one by one to conduct hierarchical clustering, using a similarity measure based on the weighted Jaccard coefficient. Voronoi partitioning has also been incorporated into our approach so that we can visually identify the image categorization produced by a support vector machine. Experimental results are finally presented to demonstrate that our visualization framework can effectively elucidate the underlying relationships between images and visual words through the anchored map representation.
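The weighted Jaccard coefficient and the pairwise merging step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are invented here, and merging two clusters by summing their visual-word histograms is an assumption about how the hierarchical clustering might combine images.

```python
def weighted_jaccard(x, y):
    # Weighted Jaccard similarity between two non-negative
    # visual-word histograms: sum(min) / sum(max).
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 0.0


def merge_most_similar(histograms, target_clusters):
    # Greedy agglomerative sketch: repeatedly merge the most
    # similar pair of histograms until only `target_clusters`
    # remain. Summing histograms on merge is an assumption.
    clusters = [list(h) for h in histograms]
    while len(clusters) > target_clusters:
        i, j = max(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda p: weighted_jaccard(clusters[p[0]], clusters[p[1]]),
        )
        merged = [u + v for u, v in zip(clusters[i], clusters[j])]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters
```

For example, two identical histograms have similarity 1.0, and disjoint histograms have similarity 0.0, so the greedy loop merges near-duplicate images first, which matches the abstract's goal of collapsing similar images before layout.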