Cross-Media Annotation Based on Semantic Network

Zeng Cheng, Li Yaqin
DOI: 10.1109/WSCS.2008.8
Published in: IEEE International Workshop on Semantic Computing and Systems, 14 July 2008
Citations: 1

Abstract

Traditional information annotation is usually based on textual description. With ever-increasing multimedia resources and rapid progress in cross-media technology, annotation across different modalities of media is becoming possible. This paper presents a cross-media annotation technique in which multimedia examples, expressed as structured information segments, are submitted to a cross-media meta-search engine; the returned results, serving in turn as examples of other media types, are submitted again. The correlation between examples and results is computed at each round, so that a cross-media semantic network is built up by this iterative process. Using the semantic network, the content of a Web page can then be explained by cross-media information spanning different audio-visual modalities. The approach is shown to be feasible and effective in the prototype system CMA (Cross Media Annotation).
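The iterative process the abstract describes (submit examples, resubmit returned results, accumulate correlations into a network) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the meta-search engine is simulated with a fixed lookup table, and the correlation function is a hypothetical placeholder that decays with iteration depth.

```python
# Sketch of the iterative cross-media semantic-network construction.
# `SIMULATED_RESULTS` stands in for the cross-media meta-search engine;
# all names and scoring choices here are illustrative assumptions.

SIMULATED_RESULTS = {
    ("text", "tiger"): [("image", "tiger_photo"), ("audio", "tiger_roar")],
    ("image", "tiger_photo"): [("text", "tiger"), ("video", "tiger_clip")],
    ("audio", "tiger_roar"): [("text", "tiger")],
    ("video", "tiger_clip"): [("image", "tiger_photo")],
}

def meta_search(example):
    """Stand-in for the cross-media meta-search engine."""
    return SIMULATED_RESULTS.get(example, [])

def correlation(example, result, round_no):
    """Toy correlation score that decays with iteration depth (assumption)."""
    return 1.0 / (1 + round_no)

def build_semantic_network(seeds, max_rounds=3):
    """Iteratively resubmit results as new queries, accumulating
    weighted edges between cross-media items."""
    edges = {}                     # (node_a, node_b) -> accumulated correlation
    frontier = list(seeds)
    seen = set(seeds)
    for round_no in range(max_rounds):
        next_frontier = []
        for example in frontier:
            for result in meta_search(example):
                key = tuple(sorted((example, result)))
                edges[key] = edges.get(key, 0.0) + correlation(example, result, round_no)
                if result not in seen:          # resubmit each new result once
                    seen.add(result)
                    next_frontier.append(result)
        frontier = next_frontier
    return edges

network = build_semantic_network([("text", "tiger")])
for (a, b), weight in sorted(network.items()):
    print(a, "<->", b, f"weight={weight:.2f}")
```

In this sketch an edge's weight grows each time the pair co-occurs in a query/result exchange, so repeatedly confirmed cross-media links (e.g. a text term and an image returned for it) end up with higher correlation than links seen only once.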