Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos

M. Silva, M. Campos, Erickson R. Nascimento
{"title":"Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos","authors":"M. Silva, M. Campos, Erickson R. Nascimento","doi":"10.5753/sibgrapi.est.2019.8302","DOIUrl":null,"url":null,"abstract":"The availability of low-cost and high-quality wearable cameras combined with the unlimited storage capacity of video-sharing websites have evoked a growing interest in First-Person Videos. Such videos are usually composed of long-running unedited streams captured by a device attached to the user body, which makes them tedious and visually unpleasant to watch. Consequently, it raises the need to provide quick access to the information therein. We propose a Sparse Coding based methodology to fast-forward First-Person Videos adaptively. Experimental evaluations show that the shorter version video resulting from the proposed method is more stable and retain more semantic information than the state-of-the-art. Visual results and graphical explanation of the methodology can be visualized through the link: https://youtu.be/rTEZurH64ME","PeriodicalId":304800,"journal":{"name":"Anais do Concurso de Teses e Dissertações da SBC (CTD-SBC 2020)","volume":"12 14","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Anais do Concurso de Teses e Dissertações da SBC (CTD-SBC 2020)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5753/sibgrapi.est.2019.8302","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The availability of low-cost, high-quality wearable cameras, combined with the unlimited storage capacity of video-sharing websites, has evoked a growing interest in First-Person Videos. Such videos are usually composed of long-running, unedited streams captured by a device attached to the user's body, which makes them tedious and visually unpleasant to watch. Consequently, there is a need to provide quick access to the information they contain. We propose a Sparse Coding-based methodology to adaptively fast-forward First-Person Videos. Experimental evaluations show that the shorter video produced by the proposed method is more stable and retains more semantic information than state-of-the-art approaches. Visual results and a graphical explanation of the methodology are available at: https://youtu.be/rTEZurH64ME
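To make the core idea concrete, the sketch below illustrates one common way a sparse-coding formulation can drive adaptive frame selection: each frame is described by a feature vector, and a row-sparse self-expressive code picks a small subset of frames that reconstructs the whole stream. This is only a minimal illustration under assumed choices (the function name `select_frames_sparse`, the L2,1-regularized objective, and the proximal-gradient solver are ours); the paper's actual method additionally incorporates semantic importance weights, which are omitted here.

```python
import numpy as np

def select_frames_sparse(X, lam=2.0, n_iter=200):
    """Pick representative frames via row-sparse self-expressive coding.

    Solves  min_C  0.5 * ||X - X C||_F^2 + lam * sum_i ||C[i, :]||_2
    by proximal gradient descent; frames whose rows of C survive the
    group soft-threshold are kept as the fast-forwarded subset.

    X : (d, n) array with one feature column per frame.
    Returns the sorted indices of the selected frames.
    """
    n = X.shape[1]
    C = np.zeros((n, n))
    # The gradient X^T (X C - X) is Lipschitz with constant ||X||_2^2.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        grad = X.T @ (X @ C - X)       # gradient of the smooth term
        C -= step * grad
        # Row-wise group soft-thresholding (prox of the L2,1 penalty):
        # rows with small energy are zeroed out, enforcing sparsity.
        norms = np.linalg.norm(C, axis=1, keepdims=True)
        C *= np.maximum(0.0, 1.0 - step * lam / (norms + 1e-12))
    row_energy = np.linalg.norm(C, axis=1)
    return np.flatnonzero(row_energy > 1e-3 * row_energy.max())

# Toy run on synthetic data: 100 "frames" of 64-D features.
rng = np.random.default_rng(0)
frames = rng.standard_normal((64, 100))
kept = select_frames_sparse(frames)
print(f"kept {len(kept)} of 100 frames:", kept)
```

The regularization weight `lam` controls the speed-up: a larger value zeroes out more rows of C, keeping fewer frames and yielding a shorter output video.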