Fast-Forward Methods for Egocentric Videos: A Review
M. Silva, W. Ramos, Alan C. Neves, Edson Roteia Araujo Junior, M. Campos, E. R. Nascimento
2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T), October 2019
DOI: 10.1109/SIBGRAPI-T.2019.00009
Abstract
The emergence of low-cost, high-quality personal wearable cameras, combined with the large and growing storage capacity of video-sharing websites, has evoked a growing interest in first-person videos. A first-person video is usually composed of monotonous, long-running, unedited streams captured by a device attached to the user's body, which makes it visually unpleasant and tedious to watch. Thus, there is an increasing need to provide quick access to the information it contains. In recent years, a popular approach to retrieving information from videos has been to produce a short version of the input video by creating a video summary; however, this approach disrupts the temporal context of the recording. Fast-Forward is another approach that creates a shorter version of the video while preserving its temporal context by increasing the playback speed. Although Fast-Forward methods preserve the story of the recording, they do not consider the semantic load of the input video. The Semantic Fast-Forward approach creates a shorter version of first-person videos that handles both the video context and the emphasis of the relevant portions, preserving the semantic load of the input video. In this paper, we present a review of representative Fast-Forward and Semantic Fast-Forward methods and discuss future directions for the area.
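To make the distinction concrete, the sketch below contrasts plain fast-forward (uniform frame skipping) with a semantic variant that skips fewer frames in segments deemed relevant. It is only an illustrative sketch: the per-frame relevance scores, the function names, and the score-to-skip mapping are assumptions made for the example, not the formulation of any particular reviewed method, which would typically also enforce a target overall speed-up and smoothness constraints on the selected frames.

    # Illustrative sketch (not from the reviewed papers): uniform vs. score-driven frame selection.
    from typing import List

    def uniform_fast_forward(num_frames: int, speedup: int) -> List[int]:
        """Plain fast-forward: keep every `speedup`-th frame, ignoring content."""
        return list(range(0, num_frames, speedup))

    def semantic_fast_forward(scores: List[float],
                              min_skip: int = 1,
                              max_skip: int = 10) -> List[int]:
        """Semantic variant: adapt the skip length to a hypothetical per-frame
        relevance score in [0, 1]. High-score (relevant) regions are played
        closer to normal speed; low-score regions are skipped aggressively."""
        selected, i = [], 0
        while i < len(scores):
            selected.append(i)
            # Map score 1.0 -> min_skip (slow playback), score 0.0 -> max_skip (fast playback).
            skip = round(max_skip - scores[i] * (max_skip - min_skip))
            i += max(min_skip, skip)
        return selected

    if __name__ == "__main__":
        # Toy relevance profile: a burst of relevant frames in the middle of the stream.
        scores = [0.1] * 40 + [0.9] * 20 + [0.1] * 40
        print(uniform_fast_forward(len(scores), speedup=5))  # evenly spaced frames
        print(semantic_fast_forward(scores))                 # denser sampling in the relevant middle segment

Running the sketch shows the two selections diverging exactly where the relevance scores rise, which is the behaviour the Semantic Fast-Forward methods surveyed here aim for with learned or hand-crafted semantic extractors.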