{"title":"基于内容-运动对应的超帧分割用于社交视频摘要","authors":"Tao Zhuo, Peng Zhang, Kangli Chen, Yanning Zhang","doi":"10.1109/ACII.2015.7344674","DOIUrl":null,"url":null,"abstract":"The goal of video summarization is to turn large volume of video data into a compact visual summary that can be easily interpreted by users in a while. Existing summarization strategies employed the point based feature correspondence for the superframe segmentation. Unfortunately, the information carried by those sparse points is far from sufficiency and stability to describe the change of interesting regions of each frame. Therefore, in order to overcome the limitations of point feature, we propose a region correspondence based superframe segmentation to achieve more effective video summarization. Instead of utilizing the motion of feature points, we calculate the similarity of content-motion to obtain the strength of change between the consecutive frames. With the help of circulant structure kernel, the proposed method is able to perform more accurate motion estimation efficiently. Experimental testing on the videos from benchmark database has demonstrate the effectiveness of the proposed method.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"24 1","pages":"857-862"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Superframe segmentation based on content-motion correspondence for social video summarization\",\"authors\":\"Tao Zhuo, Peng Zhang, Kangli Chen, Yanning Zhang\",\"doi\":\"10.1109/ACII.2015.7344674\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The goal of video summarization is to turn large volume of video data into a compact visual summary that can be easily interpreted by users in a while. Existing summarization strategies employed the point based feature correspondence for the superframe segmentation. Unfortunately, the information carried by those sparse points is far from sufficiency and stability to describe the change of interesting regions of each frame. Therefore, in order to overcome the limitations of point feature, we propose a region correspondence based superframe segmentation to achieve more effective video summarization. Instead of utilizing the motion of feature points, we calculate the similarity of content-motion to obtain the strength of change between the consecutive frames. With the help of circulant structure kernel, the proposed method is able to perform more accurate motion estimation efficiently. 
Experimental testing on the videos from benchmark database has demonstrate the effectiveness of the proposed method.\",\"PeriodicalId\":6863,\"journal\":{\"name\":\"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)\",\"volume\":\"24 1\",\"pages\":\"857-862\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-09-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ACII.2015.7344674\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACII.2015.7344674","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The goal of video summarization is to turn a large volume of video data into a compact visual summary that users can interpret in a short time. Existing summarization strategies employ point-based feature correspondence for superframe segmentation. Unfortunately, the information carried by such sparse points is neither sufficient nor stable enough to describe the changes in the regions of interest of each frame. To overcome the limitations of point features, we propose a region-correspondence-based superframe segmentation for more effective video summarization. Instead of relying on the motion of feature points, we compute a content-motion similarity to measure the strength of change between consecutive frames. With the help of a circulant-structure kernel, the proposed method performs more accurate motion estimation efficiently. Experiments on videos from a benchmark database demonstrate the effectiveness of the proposed method.
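The abstract does not give implementation details, but the circulant-structure idea it references is the trick popularized by kernelized correlation filters: a kernel response over all cyclic shifts of an image can be evaluated with a single FFT pair instead of an explicit search. The sketch below is a minimal Python/NumPy illustration of scoring content-motion change between consecutive frames in that spirit; the Gaussian kernel, the per-frame normalization, the sigma value, and the "1 - peak response" change score are my own assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel_correlation(x, xp, sigma=0.5):
    """Gaussian kernel between x and every cyclic shift of xp, evaluated
    in the Fourier domain via the circulant-structure trick.
    x and xp are same-size 2-D float arrays."""
    # Cross-correlation with all shifts using one forward/inverse FFT pair.
    cross = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(xp))).real
    dist2 = np.sum(x ** 2) + np.sum(xp ** 2) - 2.0 * cross
    return np.exp(-np.maximum(dist2, 0.0) / (sigma ** 2 * x.size))

def change_strength(frame_a, frame_b, sigma=0.5):
    """Illustrative content-motion change score between consecutive frames:
    the peak kernel response over all shifts measures how well one frame
    aligns with the other, so 1 - peak acts as a strength of change."""
    a = (frame_a - frame_a.mean()) / (frame_a.std() + 1e-8)
    b = (frame_b - frame_b.mean()) / (frame_b.std() + 1e-8)
    return 1.0 - gaussian_kernel_correlation(a, b, sigma).max()

# Hypothetical usage: large scores mark candidate superframe boundaries.
# frames = [...]  # list of 2-D grayscale arrays (e.g. downsampled video frames)
# scores = [change_strength(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
```

Under this reading, a sequence of per-frame change scores could then be thresholded or segmented into superframes, but how the paper actually forms the segments from these scores is not specified in the abstract.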