M. Oujaoura, R. El Ayachi, B. Minaoui, M. Fakir, O. Bencharef
Grouping K-Means Adjacent Regions for Semantic Image Annotation Using Bayesian Networks
2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), 2016-03-01. DOI: 10.1109/CGIV.2016.54
Cited by: 2
Abstract
To perform a semantic search on a large dataset of images, we need to be able to transform the visual content of images (colors, textures, shapes) into semantic information. This transformation, called image annotation, assigns a caption or keywords to the visual content of a digital image. In this paper, we attempt to partially resolve the region homogeneity problem in image annotation by proposing an approach that annotates images based on grouping adjacent regions. We use the k-means algorithm for segmentation, while texture and GIST descriptors are used as features to represent image content. Bayesian networks are used as classifiers to find and assign the appropriate keywords to this content. The experimental results were obtained on the ETH-80 image database.
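The segmentation-and-grouping step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it clusters pixel colors with plain k-means and then merges 4-adjacent segments whose mean colors are similar, standing in for the paper's adjacent-region grouping; the texture/GIST features and the Bayesian-network classification stage are omitted, and the merge threshold and function names are assumptions for the sketch.

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Plain k-means on the rows of X; returns one cluster label per row.
    Centers are initialised from distinct rows so degenerate inits are avoided."""
    rng = np.random.default_rng(seed)
    uniq = np.unique(X, axis=0)
    centers = uniq[rng.choice(len(uniq), size=min(k, len(uniq)),
                              replace=False)].astype(float)
    for _ in range(iters):
        # squared distance of every row to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def segment_and_group(img, k, merge_thresh):
    """Segment an (H, W, C) image by clustering pixel colors, then merge
    4-adjacent segments whose mean colors differ by less than merge_thresh."""
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(float)
    seg = kmeans_labels(X, k).reshape(h, w)
    # mean color per segment (kept fixed during merging, for simplicity)
    means = {j: X[seg.ravel() == j].mean(axis=0) for j in np.unique(seg)}
    # union-find over segment ids
    parent = {j: j for j in means}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    # scan 4-neighbour pixel pairs; merge similar adjacent segments
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and seg[y, x] != seg[ny, nx]:
                    a, b = find(seg[y, x]), find(seg[ny, nx])
                    if a != b and np.linalg.norm(means[a] - means[b]) < merge_thresh:
                        parent[b] = a
    # relabel every pixel with its segment's root id
    return np.vectorize(find)(seg)
```

With a low threshold the k-means segments are kept apart; with a high one, adjacent regions collapse into larger homogeneous regions, which is the effect the grouping step targets before feature extraction and classification.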