Video Generation and Synthesis Network for Long-term Video Interpolation

Na-young Kim, Jung Kyung Lee, C. Yoo, Seunghyun Cho, Jewon Kang
{"title":"Video Generation and Synthesis Network for Long-term Video Interpolation","authors":"Na-young Kim, Jung Kyung Lee, C. Yoo, Seunghyun Cho, Jewon Kang","doi":"10.23919/APSIPA.2018.8659743","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a bidirectional synthesis video interpolation technique based on deep learning, using a forward and a backward video generation network and a synthesis network. The forward generation network first extrapolates a video sequence, given the past video frames, and then the backward generation network generates the same video sequence, given the future video frames. Next, a synthesis network fuses the results of the two generation networks to create an intermediate video sequence. To jointly train the video generation and synthesis networks, we define a cost function to approximate the visual quality and the motion of the interpolated video as close as possible to those of the original video. Experimental results show that the proposed technique outperforms the state-of-the art long-term video interpolation model based on deep learning.","PeriodicalId":287799,"journal":{"name":"2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/APSIPA.2018.8659743","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

In this paper, we propose a bidirectional synthesis video interpolation technique based on deep learning, using a forward video generation network, a backward video generation network, and a synthesis network. The forward generation network first extrapolates a video sequence from the past video frames, and the backward generation network then generates the same video sequence from the future video frames. Next, a synthesis network fuses the outputs of the two generation networks to create the intermediate video sequence. To jointly train the video generation and synthesis networks, we define a cost function that drives the visual quality and the motion of the interpolated video to be as close as possible to those of the original video. Experimental results show that the proposed technique outperforms the state-of-the-art deep-learning-based long-term video interpolation model.
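
The abstract describes the overall pipeline rather than a concrete implementation. The sketch below is a minimal PyTorch rendering of the generate-then-synthesize idea under assumptions of our own: 3D-convolutional generators, a small fusion network, and a pixel-plus-temporal-difference loss standing in for the paper's joint cost function. None of these architectural or loss details come from the paper itself.

```python
# Minimal sketch of the bidirectional generate-then-synthesize pipeline.
# Generator/synthesis architectures and the loss terms are illustrative
# assumptions, not the paper's actual design.
import torch
import torch.nn as nn


class Generator3D(nn.Module):
    """Extrapolates a clip of T frames from T conditioning frames."""

    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, channels, time, height, width)
        return self.net(frames)


class SynthesisNet(nn.Module):
    """Fuses the forward and backward predictions into one intermediate clip."""

    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, fwd: torch.Tensor, bwd: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([fwd, bwd], dim=1))


def interpolate(past, future, g_fwd, g_bwd, synth):
    """past, future: (B, C, T, H, W) clips on either side of the missing gap."""
    fwd = g_fwd(past)                          # extrapolate forward from past frames
    bwd = g_bwd(torch.flip(future, dims=[2]))  # extrapolate backward from future frames
    bwd = torch.flip(bwd, dims=[2])            # restore chronological order
    return synth(fwd, bwd)                     # fused intermediate frames


def joint_loss(pred, target, alpha=1.0):
    """Hypothetical stand-in for the joint cost: a pixel term for visual quality
    plus a temporal-difference term as a rough proxy for motion fidelity."""
    pixel = torch.mean(torch.abs(pred - target))
    d_pred = pred[:, :, 1:] - pred[:, :, :-1]      # frame-to-frame differences
    d_true = target[:, :, 1:] - target[:, :, :-1]
    motion = torch.mean(torch.abs(d_pred - d_true))
    return pixel + alpha * motion


# Example usage with random tensors in place of real video clips:
# past, future = torch.randn(1, 3, 4, 64, 64), torch.randn(1, 3, 4, 64, 64)
# mid = interpolate(past, future, Generator3D(), Generator3D(), SynthesisNet())
```

The design rationale, as described in the abstract, is that forward and backward extrapolation give two independent estimates of each missing frame; the synthesis network learns how to weight and fuse them, and joint training lets all three networks adapt to one another.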