Video Retrieval based on Patterns of Oriented Edge Magnitude
Authors: K. R. Holla, B. H. Shekar
DOI: 10.1145/2983402.2983433
Published: 2016-09-21, in Proceedings of the Third International Symposium on Computer Vision and the Internet
Citations: 1
Abstract
In this work, a video retrieval system based on the POEM (Patterns of Oriented Edge Magnitudes) descriptor is proposed. In the first stage, the input video is partitioned into shots based on Gabor moments, and keyframes are selected from each shot using the Temporally Maximum Occurrence Frame (TMOF) criterion. In the next stage, the POEM descriptor is computed from each keyframe to obtain a robust image/frame representation. Given a query frame, its descriptor is obtained in the same manner and compared with the descriptors of the video keyframes using a nearest-neighbour matching technique to find the matching keyframe. Experiments conducted on TRECVID video segments demonstrate the effectiveness of the proposed approach for video retrieval applications.
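The retrieval step described above, matching a query frame's descriptor against the keyframe descriptors by nearest neighbour, can be sketched as follows. This is a minimal illustration under the assumption of Euclidean distance; the descriptor vectors shown are toy stand-ins, since the abstract does not detail the POEM extraction itself:

```python
import math

def nearest_keyframe(query_desc, keyframe_descs):
    """Return the index of the keyframe descriptor closest to the
    query descriptor under Euclidean (L2) distance.

    Descriptors are plain lists of floats; the actual POEM extraction
    is not shown here -- any per-frame feature vector plugs in the same way.
    """
    best_idx, best_dist = -1, math.inf
    for i, desc in enumerate(keyframe_descs):
        d = math.dist(query_desc, desc)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx

# Toy 3-D "descriptors" for four keyframes (stand-ins for real POEM features).
keyframes = [[0.1, 0.9, 0.2], [0.8, 0.1, 0.3], [0.4, 0.4, 0.9], [0.2, 0.2, 0.2]]
query = [0.42, 0.38, 0.88]  # nearest to keyframe 2
print(nearest_keyframe(query, keyframes))  # -> 2
```

In practice the linear scan above would be replaced by an indexed nearest-neighbour structure (e.g. a k-d tree) when the keyframe collection is large.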