A Novel Approach for Video Inpainting Using Autoencoders

Irfan A. Siddavatam, A. Dalvi, D. Pawade, A. Bhatt, Jyeshtha Vartak, Arnav Gupta
{"title":"A Novel Approach for Video Inpainting Using Autoencoders","authors":"Irfan A. Siddavatam, A. Dalvi, D. Pawade, A. Bhatt, Jyeshtha Vartak, Arnav Gupta","doi":"10.5815/ijieeb.2021.06.05","DOIUrl":null,"url":null,"abstract":"Inpainting is a task undertaken to fill in damaged or missing parts of an image or video frame, with believable content. The aim of this operation is to realistically complete images or frames of videos for a variety of applications such as conservation and restoration of art, editing images and videos for aesthetic purposes, but might cause malpractices such as evidence tampering. From the image and video editing perspective, inpainting is used mainly in the context of generating content to fill the gaps left after removing a particular object from the image or the video. Video Inpainting, an extension of Image Inpainting, is a much more challenging task due to the constraint added by the time dimension. Several techniques do exist that achieve the task of removing an object from a given video, but they are still in a nascent stage. The major objective of this paper is to study the available approaches of inpainting and propose a solution to the limitations of existing inpainting techniques. After studying existing inpainting techniques, we realized that most of them make use of a ground truth frame to generate plausible results. A 'ground truth' frame is an image without the target object or in other words, an image that provides maximum information about the background, which is then used to fill spaces after object removal. In this paper, we propose an approach where there is no requirement of a 'ground truth' frame, provided that the video has enough contexts available about the background that is to be recreated. We would be using frames from the video in hand, to gather context for the background. As the position of the target object to be removed will vary from one frame to the next, each subsequent frame will reveal the region that was initially behind the object, and provide more information about the background as a whole. Later, we have also discussed the potential limitations of our approach and some workarounds for the same, while showing the direction for further research.","PeriodicalId":427770,"journal":{"name":"International Journal of Information Engineering and Electronic Business","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Engineering and Electronic Business","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5815/ijieeb.2021.06.05","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Inpainting is the task of filling in damaged or missing parts of an image or video frame with believable content. The aim is to realistically complete images or video frames for a variety of applications, such as the conservation and restoration of art or the editing of images and videos for aesthetic purposes, though it can also enable malpractice such as evidence tampering. From the image and video editing perspective, inpainting is used mainly to generate content that fills the gap left after removing a particular object from an image or video. Video inpainting, an extension of image inpainting, is a much more challenging task due to the constraint added by the time dimension. Several techniques exist that remove an object from a given video, but they are still at a nascent stage. The major objective of this paper is to study the available approaches to inpainting and to propose a solution to the limitations of existing inpainting techniques. After studying existing inpainting techniques, we found that most of them rely on a ground-truth frame to generate plausible results. A 'ground truth' frame is an image without the target object, that is, an image that provides maximum information about the background, which is then used to fill the space left after object removal. In this paper, we propose an approach that requires no 'ground truth' frame, provided the video contains enough context about the background to be recreated. We use frames from the video itself to gather context for the background. Because the position of the target object varies from one frame to the next, each subsequent frame reveals part of the region initially hidden behind the object and provides more information about the background as a whole. Finally, we discuss the potential limitations of our approach, some workarounds for them, and directions for further research.
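To make the idea concrete, below is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: background context is aggregated from frames in which the moving object has revealed previously occluded pixels, and a small convolutional autoencoder fills the hole in a given frame using that aggregated context. The helper `aggregate_background`, the `InpaintingAutoencoder` layout, and all tensor shapes are illustrative assumptions, written in PyTorch.

```python
# Illustrative sketch only: aggregate background context across frames, then fill the
# masked region with a small convolutional autoencoder. Function and class names here
# are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


def aggregate_background(frames: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Average, per pixel, the values from frames in which that pixel is NOT covered
    by the object mask (mask == 1 marks the object to remove).

    frames: (T, C, H, W) floats; masks: (T, 1, H, W) floats in {0, 1}.
    Returns a (C, H, W) background estimate; pixels never revealed stay 0.
    """
    visible = 1.0 - masks                        # 1 where the background is visible
    summed = (frames * visible).sum(dim=0)       # accumulate visible background pixels
    counts = visible.sum(dim=0).clamp(min=1.0)   # avoid division by zero
    return summed / counts


class InpaintingAutoencoder(nn.Module):
    """Tiny encoder/decoder: input is the masked frame concatenated with the
    aggregated background estimate and the mask (2*C + 1 channels)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        in_ch = channels * 2 + 1
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_frame, background, mask):
        x = torch.cat([masked_frame, background, mask], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    T, C, H, W = 8, 3, 64, 64
    frames = torch.rand(T, C, H, W)                   # stand-in for video frames
    masks = (torch.rand(T, 1, H, W) > 0.9).float()    # stand-in for object masks
    background = aggregate_background(frames, masks)

    model = InpaintingAutoencoder()
    frame0, mask0 = frames[0:1], masks[0:1]
    filled = model(frame0 * (1 - mask0), background.unsqueeze(0), mask0)
    # Composite: keep original pixels outside the hole, use the network output inside it.
    result = frame0 * (1 - mask0) + filled * mask0
    print(result.shape)  # torch.Size([1, 3, 64, 64])
```

The key design choice this sketch illustrates is that the network never sees a separate object-free 'ground truth' image; its only background cue is the context accumulated from other frames of the same video as the object moves.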