Grouping K-Means Adjacent Regions for Semantic Image Annotation Using Bayesian Networks

M. Oujaoura, R. El Ayachi, B. Minaoui, M. Fakir, O. Bencharef
DOI: 10.1109/CGIV.2016.54
Published in: 2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), March 2016
Citations: 2

Abstract

To perform a semantic search on a large dataset of images, we need to be able to transform the visual content of images (colors, textures, shapes) into semantic information. This transformation, called image annotation, assigns a caption or keywords to the visual content of a digital image. In this paper we attempt to partially resolve the region homogeneity problem in image annotation by proposing an approach that annotates images based on grouping adjacent regions. We use the k-means algorithm for segmentation, while texture and GIST descriptors serve as features to represent image content. Bayesian networks are used as classifiers to find and assign the appropriate keywords to this content. The experimental results were obtained on the ETH-80 image database.
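The first two stages of the pipeline described above (k-means segmentation of pixel colors, then grouping of adjacent same-cluster pixels into homogeneous regions) can be sketched as follows. This is a minimal illustration in pure NumPy, not the authors' implementation: the function names, the deterministic farthest-point initialization, and the toy 8x8 image are all assumptions made for the example.

```python
# Hedged sketch of the segment-then-group step: k-means on pixel colors,
# followed by merging 4-connected pixels that share a cluster label into
# regions. Illustrative only; not the paper's actual code.
import numpy as np

def kmeans(pixels, k, iters=20):
    """Cluster flattened pixel colors into k groups (Lloyd's algorithm).

    Uses a deterministic farthest-point initialization so the toy
    example below is reproducible.
    """
    centers = [pixels[0].copy()]
    for _ in range(1, k):
        # next center = pixel farthest from all chosen centers
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()].copy())
    centers = np.array(centers)
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute means
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def group_adjacent(label_map):
    """Merge 4-connected pixels sharing a cluster label into regions."""
    h, w = label_map.shape
    region = -np.ones((h, w), dtype=int)
    nregions = 0
    for i in range(h):
        for j in range(w):
            if region[i, j] >= 0:
                continue
            # flood-fill the connected component of this cluster label
            stack, lab = [(i, j)], label_map[i, j]
            region[i, j] = nregions
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and region[ny, nx] < 0
                            and label_map[ny, nx] == lab):
                        region[ny, nx] = nregions
                        stack.append((ny, nx))
            nregions += 1
    return region, nregions

# toy image: left half dark, right half bright
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
labels = kmeans(img.reshape(-1, 3).astype(float), k=2).reshape(8, 8)
regions, n = group_adjacent(labels)
print(n)  # two regions: one per half of the toy image
```

In the full method, each grouped region (rather than each raw k-means cluster) would then be described by texture and GIST features and passed to the Bayesian network classifier for keyword assignment.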