Eventfulness for Interactive Video Alignment

Jiatian Sun, Longxiuling Deng, Triantafyllos Afouras, Andrew Owens, Abe Davis
{"title":"Eventfulness for Interactive Video Alignment","authors":"Jiatian Sun, Longxiuling Deng, Triantafyllos Afouras, Andrew Owens, Abe Davis","doi":"10.1145/3592118","DOIUrl":null,"url":null,"abstract":"Humans are remarkably sensitive to the alignment of visual events with other stimuli, which makes synchronization one of the hardest tasks in video editing. A key observation of our work is that most of the alignment we do involves salient localizable events that occur sparsely in time. By learning how to recognize these events, we can greatly reduce the space of possible synchronizations that an editor or algorithm has to consider. Furthermore, by learning descriptors of these events that capture additional properties of visible motion, we can build active tools that adapt their notion of eventfulness to a given task as they are being used. Rather than learning an automatic solution to one specific problem, our goal is to make a much broader class of interactive alignment tasks significantly easier and less time-consuming. We show that a suitable visual event descriptor can be learned entirely from stochastically-generated synthetic video. We then demonstrate the usefulness of learned and adaptive eventfulness by integrating it in novel interactive tools for applications including audio-driven time warping of video and the extraction and application of sound effects across different videos.","PeriodicalId":7077,"journal":{"name":"ACM Transactions on Graphics (TOG)","volume":"79 1","pages":"1 - 10"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Graphics (TOG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3592118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Humans are remarkably sensitive to the alignment of visual events with other stimuli, which makes synchronization one of the hardest tasks in video editing. A key observation of our work is that most of the alignment we do involves salient localizable events that occur sparsely in time. By learning how to recognize these events, we can greatly reduce the space of possible synchronizations that an editor or algorithm has to consider. Furthermore, by learning descriptors of these events that capture additional properties of visible motion, we can build active tools that adapt their notion of eventfulness to a given task as they are being used. Rather than learning an automatic solution to one specific problem, our goal is to make a much broader class of interactive alignment tasks significantly easier and less time-consuming. We show that a suitable visual event descriptor can be learned entirely from stochastically-generated synthetic video. We then demonstrate the usefulness of learned and adaptive eventfulness by integrating it in novel interactive tools for applications including audio-driven time warping of video and the extraction and application of sound effects across different videos.
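To make the idea concrete, the sketch below is a minimal illustration of how sparse visual events could drive alignment, not the authors' implementation: all function names, thresholds, and the greedy matching strategy are assumptions made for the example. It picks peaks of a per-frame eventfulness score, matches them monotonically to audio onset times, and interpolates a piecewise-linear time warp between the matched anchors.

```python
# Hypothetical sketch (not the paper's pipeline): detect sparse "event" peaks
# in a per-frame eventfulness signal, pair them with audio onset times, and
# build a piecewise-linear time warp from the matched anchors.
import numpy as np

def detect_event_peaks(eventfulness, threshold=0.5, min_gap=5):
    """Return frame indices of local maxima above `threshold`,
    separated by at least `min_gap` frames (i.e., sparse events)."""
    peaks = []
    for t in range(1, len(eventfulness) - 1):
        if (eventfulness[t] >= threshold
                and eventfulness[t] >= eventfulness[t - 1]
                and eventfulness[t] >= eventfulness[t + 1]):
            if not peaks or t - peaks[-1] >= min_gap:
                peaks.append(t)
    return np.array(peaks)

def align_events_to_onsets(event_frames, onset_times, fps):
    """Greedy monotone matching of visual event times to audio onsets.
    Returns (source_times, target_times) anchor pairs for warping."""
    event_times = event_frames / fps
    src, dst = [], []
    j = 0
    for et in event_times:
        # advance to the onset closest to this event (without going backwards)
        while (j + 1 < len(onset_times)
               and abs(onset_times[j + 1] - et) < abs(onset_times[j] - et)):
            j += 1
        src.append(et)
        dst.append(onset_times[j])
        j = min(j + 1, len(onset_times) - 1)
    return np.array(src), np.array(dst)

def warp_time(query_times, src_anchors, dst_anchors):
    """Piecewise-linear warp from original video time to synchronized time
    (np.interp clamps outside the anchor range; fine for a sketch)."""
    return np.interp(query_times, src_anchors, dst_anchors)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fps = 30.0
    eventfulness = rng.random(300) * 0.3          # mostly uneventful frames
    eventfulness[[45, 120, 210]] = 1.0            # three salient visual events
    onsets = np.array([1.4, 3.9, 7.1])            # e.g., drum hits in the audio
    peaks = detect_event_peaks(eventfulness)
    src, dst = align_events_to_onsets(peaks, onsets, fps)
    frame_times = np.arange(300) / fps
    warped = warp_time(frame_times, src, dst)     # new timestamp for every frame
    print(list(zip(src, dst)))
```

Because only a handful of anchors are involved, an interactive tool built on this idea could let an editor add, remove, or re-pair anchors and recompute the warp immediately, which is the kind of workflow the abstract describes.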