{"title":"面向视频摘要的语义视听分析","authors":"Junyong You, M. Hannuksela, M. Gabbouj","doi":"10.1109/EURCON.2009.5167816","DOIUrl":null,"url":null,"abstract":"This paper proposes a semantic audiovisual analysis approach for video summarization. The sequence to be analyzed is first segmented into scenes according to audio similarity. Some global clues such as loudness, the ratio of unrelated shots, and the affective relationship between the scenes and the whole sequence are employed to compute the semantic scene importance. The shots in each scene are grouped based on the luminance histograms, and the semantic shot importance is then calculated using selected audio and video features. Subsequently, key frames are extracted according to the semantic frame importance computed based on certain visual features, such as attention region and motion information. This approach is effective to generate a representative video summary whilst avoiding some disadvantages of the traditional video summarization methods. Experimental results demonstrate promising performance of the proposed approach.","PeriodicalId":256285,"journal":{"name":"IEEE EUROCON 2009","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Semantic audiovisual analysis for video summarization\",\"authors\":\"Junyong You, M. Hannuksela, M. Gabbouj\",\"doi\":\"10.1109/EURCON.2009.5167816\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a semantic audiovisual analysis approach for video summarization. The sequence to be analyzed is first segmented into scenes according to audio similarity. Some global clues such as loudness, the ratio of unrelated shots, and the affective relationship between the scenes and the whole sequence are employed to compute the semantic scene importance. The shots in each scene are grouped based on the luminance histograms, and the semantic shot importance is then calculated using selected audio and video features. Subsequently, key frames are extracted according to the semantic frame importance computed based on certain visual features, such as attention region and motion information. This approach is effective to generate a representative video summary whilst avoiding some disadvantages of the traditional video summarization methods. 
Experimental results demonstrate promising performance of the proposed approach.\",\"PeriodicalId\":256285,\"journal\":{\"name\":\"IEEE EUROCON 2009\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE EUROCON 2009\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EURCON.2009.5167816\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE EUROCON 2009","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EURCON.2009.5167816","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper proposes a semantic audiovisual analysis approach for video summarization. The sequence to be analyzed is first segmented into scenes according to audio similarity. Global cues such as loudness, the ratio of unrelated shots, and the affective relationship between each scene and the whole sequence are employed to compute the semantic scene importance. The shots in each scene are grouped based on their luminance histograms, and the semantic shot importance is then calculated using selected audio and video features. Subsequently, key frames are extracted according to the semantic frame importance, which is computed from visual features such as attention region and motion information. This approach effectively generates a representative video summary while avoiding some disadvantages of traditional video summarization methods. Experimental results demonstrate promising performance of the proposed approach.
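The shot-grouping step can be illustrated with a minimal sketch. The Python code below is not from the paper: the histogram-intersection similarity measure, the 0.8 threshold, and the greedy assignment strategy are assumptions made purely for illustration. It groups shots by comparing normalized luminance histograms of one representative frame per shot.

```python
import numpy as np

def luminance_histogram(frame, bins=32):
    """Normalized luminance histogram of an 8-bit grayscale frame (H x W array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means the histograms are identical."""
    return np.minimum(h1, h2).sum()

def group_shots(shot_frames, threshold=0.8):
    """Greedily assign each shot (represented here by a single frame) to the
    first existing group whose reference histogram is similar enough;
    otherwise start a new group. Returns a list of groups of shot indices.
    NOTE: the threshold and grouping strategy are illustrative assumptions."""
    groups, refs = [], []
    for idx, frame in enumerate(shot_frames):
        h = luminance_histogram(frame)
        for group, ref in zip(groups, refs):
            if histogram_intersection(h, ref) >= threshold:
                group.append(idx)
                break
        else:
            groups.append([idx])
            refs.append(h)
    return groups
```

A semantic importance score would then be computed per group from the selected audio and video features, which this sketch does not attempt to model.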