{"title":"Non user interaction content summarization","authors":"S. S. Thomas, Sumana Gupta, K. Venkatesh","doi":"10.1109/ICDSP.2014.6900672","DOIUrl":null,"url":null,"abstract":"An effective comprehension and filtering of video clip contents is a promising approach towards video summarization. Information about the movement patterns of detecting objects would best be concatenated into a single image. It contracts browsing time and reduces spatio-temporal redundancy, while perpetuating the nub of the clip content and the impression of motion. In this paper, we introduce a system for summarizing movements of multiple objects across a single camera. However, this is not a trivial task. There arise issues of trajectory representation without motion ambiguity and excessive occlusion. This paper addresses some of these concerns and above all presents an approach that helps the viewer to have a more automated summary of the general content of the clip. We address fully automated reference frame selection and frame removal for a productive video clip summary and handle many difficult examples involving interacting objects. We have evaluated our approach to different types of video clip.","PeriodicalId":301856,"journal":{"name":"2014 19th International Conference on Digital Signal Processing","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 19th International Conference on Digital Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2014.6900672","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Effective comprehension and filtering of video clip content is a promising approach to video summarization. Information about the movement patterns of detected objects is best condensed into a single image: this shortens browsing time and reduces spatio-temporal redundancy while preserving the essence of the clip content and the impression of motion. In this paper, we introduce a system for summarizing the movements of multiple objects captured by a single camera. This is not a trivial task, since trajectories must be represented without motion ambiguity or excessive occlusion. This paper addresses some of these concerns and, above all, presents an approach that gives the viewer an automated summary of the general content of the clip. We address fully automated reference frame selection and frame removal to produce an effective video clip summary, and we handle many difficult examples involving interacting objects. We have evaluated our approach on different types of video clips.
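To make the general idea concrete, the following is a minimal sketch (not the authors' method) of one way to condense the motion of detected objects into a single summary image: estimate a static reference frame, detect foreground blobs in sampled frames, and blend them onto the reference. The file name "clip.mp4", the median-background reference choice, and all thresholds are illustrative assumptions, not details from the paper.

```python
# Hypothetical single-image motion summary for a fixed-camera clip.
# Assumptions: OpenCV background subtraction stands in for the paper's
# object detection; blending stands in for its occlusion handling.
import cv2
import numpy as np

def summarize_clip(path="clip.mp4", stride=10, min_area=500):
    cap = cv2.VideoCapture(path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    if not frames:
        raise ValueError("no frames read from clip")

    # Reference frame: the temporal median approximates the static background.
    reference = np.median(np.stack(frames[::stride]), axis=0).astype(np.uint8)
    summary = reference.copy()

    # Warm up the background model on the full clip.
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    for frame in frames:
        subtractor.apply(frame)

    # Second pass: paste foreground blobs from a sparse set of frames
    # onto the reference so the motion pattern is visible in one image.
    for frame in frames[::stride]:
        mask = subtractor.apply(frame, learningRate=0)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            # Blend rather than overwrite, so overlapping object instances
            # stay partially visible and occlusion clutter is reduced.
            roi = summary[y:y + h, x:x + w].astype(np.float32)
            obj = frame[y:y + h, x:x + w].astype(np.float32)
            summary[y:y + h, x:x + w] = (0.5 * roi + 0.5 * obj).astype(np.uint8)
    return summary

if __name__ == "__main__":
    cv2.imwrite("summary.png", summarize_clip())
```

The stride controls how densely the object's path is sampled: a smaller stride gives a denser trail at the cost of more overlap between pasted instances, which is exactly the trade-off the paper's reference frame selection and frame removal are meant to manage.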