Automatic image annotation with long distance spatial-context

Donglin Cao, Dazhen Lin, Jiansong Yu
{"title":"远距离空间上下文自动图像标注","authors":"Donglin Cao, Dazhen Lin, Jiansong Yu","doi":"10.1109/UKCI.2014.6930181","DOIUrl":null,"url":null,"abstract":"Because of high computational complexity, a long distance spatial-context based automatic image annotation is hard to achieve. Some state of art approaches in image processing, such as 2D-HMM, only considering short distance spatial-context (two neighbors) to reduce the computational complexity. However, these approaches cannot describe long distance semantic spatial-context in image. Therefore, in this paper, we propose a two-step Long Distance Spatial-context Model (LDSM) to solve that problem. First, because of high computational complexity in 2D spatial-context, we transform a 2D spatial-context into a 1D sequence-context. Second, we use conditional random fields to model the 1D sequence-context. Our experiments show that LDSM models the semantic relation between annotated object and background, and experiment results outperform the classical automatic image annotation approach (SVM).","PeriodicalId":315044,"journal":{"name":"2014 14th UK Workshop on Computational Intelligence (UKCI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automatic image annotation with long distance spatial-context\",\"authors\":\"Donglin Cao, Dazhen Lin, Jiansong Yu\",\"doi\":\"10.1109/UKCI.2014.6930181\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Because of high computational complexity, a long distance spatial-context based automatic image annotation is hard to achieve. Some state of art approaches in image processing, such as 2D-HMM, only considering short distance spatial-context (two neighbors) to reduce the computational complexity. However, these approaches cannot describe long distance semantic spatial-context in image. Therefore, in this paper, we propose a two-step Long Distance Spatial-context Model (LDSM) to solve that problem. First, because of high computational complexity in 2D spatial-context, we transform a 2D spatial-context into a 1D sequence-context. Second, we use conditional random fields to model the 1D sequence-context. 
Our experiments show that LDSM models the semantic relation between annotated object and background, and experiment results outperform the classical automatic image annotation approach (SVM).\",\"PeriodicalId\":315044,\"journal\":{\"name\":\"2014 14th UK Workshop on Computational Intelligence (UKCI)\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 14th UK Workshop on Computational Intelligence (UKCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/UKCI.2014.6930181\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 14th UK Workshop on Computational Intelligence (UKCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UKCI.2014.6930181","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Because of its high computational complexity, automatic image annotation based on long distance spatial-context is hard to achieve. Some state-of-the-art approaches in image processing, such as the 2D-HMM, consider only short distance spatial-context (two neighbours) to reduce the computational complexity. However, these approaches cannot describe long distance semantic spatial-context in an image. Therefore, in this paper we propose a two-step Long Distance Spatial-context Model (LDSM) to solve that problem. First, because of the high computational complexity of the 2D spatial-context, we transform the 2D spatial-context into a 1D sequence-context. Second, we use conditional random fields to model the 1D sequence-context. Our experiments show that LDSM models the semantic relation between the annotated object and the background, and the experimental results outperform a classical automatic image annotation approach (SVM).
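The abstract describes the two LDSM steps only at a high level. The following is a minimal sketch of that idea, assuming image regions are laid out on a grid: the zigzag scan order, the toy region features and labels, and the use of the sklearn-crfsuite library are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two-step LDSM idea from the abstract (hypothetical details):
# step 1 flattens a 2D grid of regions into a 1D sequence, step 2 fits a
# linear-chain CRF over that sequence.
import sklearn_crfsuite


def grid_to_sequence(grid):
    """Flatten a 2D grid into a 1D sequence with a zigzag (boustrophedon)
    scan so that spatially neighbouring regions tend to stay adjacent."""
    sequence = []
    for i, row in enumerate(grid):
        sequence.extend(row if i % 2 == 0 else row[::-1])
    return sequence


# Toy training data: one image as a 2x2 grid of per-region feature dicts
# with matching region labels (features and labels are made up).
train_grids = [
    [[{"colour": "blue", "texture": "smooth"}, {"colour": "blue", "texture": "smooth"}],
     [{"colour": "grey", "texture": "rough"},  {"colour": "green", "texture": "rough"}]],
]
train_label_grids = [
    [["sky", "sky"],
     ["rock", "grass"]],
]

# Step 1: 2D spatial-context -> 1D sequence-context.
X_train = [grid_to_sequence(g) for g in train_grids]
y_train = [grid_to_sequence(g) for g in train_label_grids]

# Step 2: a linear-chain CRF over the 1D sequence. Labels influence each
# other along the chain, so regions that are not 2D neighbours can still
# constrain one another through the sequence.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)

print(crf.predict(X_train))
```

The sketch only illustrates the data flow; the paper's actual region segmentation, feature extraction, scan order, and CRF feature functions are not specified in the abstract.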