Title: Cross-Media Annotation Based on Semantic Network
Authors: Zeng Cheng, Li Yaqin
Venue: IEEE International Workshop on Semantic Computing and Systems
Publication date: 2008-07-14
DOI: 10.1109/WSCS.2008.8 (https://doi.org/10.1109/WSCS.2008.8)
Citations: 1
Abstract
Traditional information annotation is usually based on textual description. With ever-increasing multimedia resources and rapid progress in cross-media technology, annotation across different modalities of media information is becoming possible. This paper presents a study of a cross-media annotation technology in which multimedia examples, expressed as structured information segments, are submitted to a cross-media meta-search engine; the returned results, serving in turn as examples of other media types, are submitted again. The correlation between these examples and results is calculated at each step, so that a cross-media semantic network is constructed through the iterative process. Using this semantic network, the content of a Web page can then be explained by cross-media information spanning different audio-visual perceptual modalities. The feasibility and effectiveness of the technology are demonstrated in the prototype system CMA (Cross-Media Annotation).
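The iterative process described in the abstract can be sketched in outline: examples are submitted to a meta-search engine, the correlated results become new examples, and each correlation above a threshold adds an edge to the growing semantic network. The following is a minimal illustration only, not the paper's implementation; the `meta_search` function, the toy media identifiers, and the threshold value are all assumptions introduced for demonstration.

```python
def meta_search(example):
    # Hypothetical stand-in for the cross-media meta-search engine:
    # returns (result, correlation) pairs for a submitted media example.
    # The identifiers and scores here are fabricated for illustration.
    fake_index = {
        "img:sunset": [("audio:waves", 0.8), ("text:beach", 0.6)],
        "audio:waves": [("img:sunset", 0.8), ("video:surf", 0.5)],
        "text:beach": [("img:sunset", 0.6)],
        "video:surf": [("audio:waves", 0.5)],
    }
    return fake_index.get(example, [])

def build_semantic_network(seeds, iterations=2, threshold=0.4):
    """Iteratively query the meta-search engine, keep correlated pairs,
    and resubmit each result as a new example (the paper's iterative step)."""
    network = {}           # (example, result) -> correlation weight
    frontier = list(seeds)
    for _ in range(iterations):
        next_frontier = []
        for example in frontier:
            for result, corr in meta_search(example):
                if corr >= threshold and (example, result) not in network:
                    network[(example, result)] = corr
                    next_frontier.append(result)  # results become new examples
        frontier = next_frontier
    return network

net = build_semantic_network(["img:sunset"])
```

Each key in `net` is a directed cross-media edge; in the paper's setting these edges would link, say, an image example to correlated audio results, and the accumulated graph is the semantic network used to annotate Web-page content across modalities.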