{"title":"基于相似度和常识知识库的大域视频语义自动标注","authors":"Amjad AlTadmri, Amr Ahmed","doi":"10.1109/ICSIPA.2009.5478723","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce a novel framework for automatic Semantic Video Annotation. As this framework detects possible events occurring in video clips, it forms the annotating base of a video search engine. To achieve this purpose, the system has to able to operate on uncontrolled wide-domain videos. Thus, all layers have to be based on generic features. The aim is to help bridge the “semantic gap“, which is the difference between the low-level visual features and the human's perception, by finding videos with similar visual events, then analyzing their free text annotation to find the best description for this new video using commonsense knowledgebases. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. Results from these experiments show promising integrity between those two layers in order to find expressing annotations for the input video. These results were evaluated based on retrieval performance.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases\",\"authors\":\"Amjad AlTadmri, Amr Ahmed\",\"doi\":\"10.1109/ICSIPA.2009.5478723\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we introduce a novel framework for automatic Semantic Video Annotation. As this framework detects possible events occurring in video clips, it forms the annotating base of a video search engine. To achieve this purpose, the system has to able to operate on uncontrolled wide-domain videos. Thus, all layers have to be based on generic features. The aim is to help bridge the “semantic gap“, which is the difference between the low-level visual features and the human's perception, by finding videos with similar visual events, then analyzing their free text annotation to find the best description for this new video using commonsense knowledgebases. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. Results from these experiments show promising integrity between those two layers in order to find expressing annotations for the input video. 
These results were evaluated based on retrieval performance.\",\"PeriodicalId\":400165,\"journal\":{\"name\":\"2009 IEEE International Conference on Signal and Image Processing Applications\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-11-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2009 IEEE International Conference on Signal and Image Processing Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSIPA.2009.5478723\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE International Conference on Signal and Image Processing Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSIPA.2009.5478723","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 18
Abstract
In this paper, we introduce a novel framework for automatic Semantic Video Annotation. Because the framework detects events that may occur in video clips, it forms the annotation base of a video search engine. To achieve this, the system has to be able to operate on uncontrolled, wide-domain videos; thus, all layers have to be based on generic features. The aim is to help bridge the "semantic gap", the difference between low-level visual features and human perception, by finding videos with similar visual events and then analyzing their free-text annotations, using commonsense knowledgebases, to find the best description for the new video. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. Results from these experiments show promising integration between the two layers in finding expressive annotations for the input video. These results were evaluated on the basis of retrieval performance.
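The abstract describes a two-layer pipeline: first retrieve already-annotated videos whose visual events resemble the query clip, then mine their free-text annotations with a commonsense knowledgebase to select the best description. Below is a minimal Python sketch of that idea. The feature vectors, the cosine-similarity retrieval, and the toy relatedness table standing in for a commonsense knowledgebase (a real system might query a resource such as ConceptNet) are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two-layer annotation idea from the abstract:
# (1) similarity layer: retrieve annotated clips with similar visual features;
# (2) commonsense layer: pick the description best supported by the
#     neighbors' annotations under a toy relatedness measure.

def cosine(a, b):
    """Cosine similarity between two generic feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def top_k_similar(query_feat, database, k=3):
    """database: list of (feature_vector, free_text_annotation) pairs."""
    ranked = sorted(database, key=lambda item: cosine(query_feat, item[0]),
                    reverse=True)
    return ranked[:k]

# Toy stand-in for a commonsense knowledgebase: pairwise term relatedness.
RELATED = {
    ("man", "person"): 1.0, ("walking", "moving"): 0.8,
    ("street", "road"): 0.9, ("car", "vehicle"): 1.0,
}

def relatedness(a, b):
    if a == b:
        return 1.0
    return max(RELATED.get((a, b), 0.0), RELATED.get((b, a), 0.0))

def best_description(neighbors):
    """Choose the neighbor annotation whose terms are most semantically
    supported by the other neighbors' annotations."""
    texts = [ann for _, ann in neighbors]

    def support(text):
        words = text.lower().split()
        others = [w for t in texts if t != text for w in t.lower().split()]
        return sum(max((relatedness(w, o) for o in others), default=0.0)
                   for w in words)

    return max(texts, key=support)

if __name__ == "__main__":
    db = [
        ([0.9, 0.1, 0.3], "man walking down a street"),
        ([0.8, 0.2, 0.4], "person moving along a road"),
        ([0.1, 0.9, 0.7], "car driving on a highway"),
    ]
    query = [0.85, 0.15, 0.35]          # features of the unannotated clip
    neighbors = top_k_similar(query, db, k=2)
    print(best_description(neighbors))  # e.g. "man walking down a street"
```

One design point the sketch tries to capture: the annotation layer never sees pixels. It reasons only over the neighbors' free text, which is what lets the visual layer stay generic enough for uncontrolled, wide-domain video.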