Exemplar-based video inpainting approach using temporal relationship of consecutive frames

Kuo-Lung Hung, Shih-Che Lai
{"title":"Exemplar-based video inpainting approach using temporal relationship of consecutive frames","authors":"Kuo-Lung Hung, Shih-Che Lai","doi":"10.1109/ICAWST.2017.8256482","DOIUrl":null,"url":null,"abstract":"Digital inpainting is a technique used to remove some specified objects or to repair damaged area in an image or a video file. It has a wide used range in many applications such as in film and entertainment production, digital archives repairing, and satellite photography making. The main challenge in video inpainting is the patched video sequence ought to remain as much visual quality as the original one. To meet this requirement, video inpainting must have a robust object tracking algorithm with considering factors of the continuity of temporal relationship between frames, especially for camera-moving cases. In this paper, we propose a video inpainting algorithm based on the exemplar-based method [2]. In the proposed method, we first use Harris corner detection to extract feature points of each frame and match them between consecutive frames. The frames with same feature points are aligned by using affine transformation, and the median filter method is then used to establish a dynamic panoramic background. Next, we propose a robust object tracking method by using three-steps searching algorithm wherein moving objects can be find in different frames. Then, the foreground objects of each frame are removed by using background subtraction. Finally, we use an exemplar-based video inpainting method to patch the video. Experimental results showed that the proposed method can not only accurately track the moving objects but also can repair the video without losing linear structure of image (frame) and will not produce blur. In sum, the proposed method is an efficient and effective video patching method that repairs the video of acceptable quality.","PeriodicalId":378618,"journal":{"name":"2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST)","volume":"229 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAWST.2017.8256482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Digital inpainting is a technique for removing specified objects or repairing damaged regions in an image or video. It is widely used in applications such as film and entertainment production, restoration of digital archives, and satellite photography. The main challenge in video inpainting is that the patched video sequence should retain as much of the visual quality of the original as possible. To meet this requirement, video inpainting requires a robust object tracking algorithm that accounts for the continuity of the temporal relationship between frames, especially when the camera is moving. In this paper, we propose a video inpainting algorithm based on the exemplar-based method [2]. In the proposed method, we first use Harris corner detection to extract feature points from each frame and match them between consecutive frames. Frames sharing the same feature points are aligned by an affine transformation, and a median filter is then applied to build a dynamic panoramic background. Next, we propose a robust object tracking method based on a three-step search algorithm that locates moving objects across frames. The foreground objects in each frame are then removed by background subtraction. Finally, an exemplar-based video inpainting method is used to patch the video. Experimental results show that the proposed method not only tracks moving objects accurately but also repairs the video without losing the linear structure of the frames and without introducing blur. In sum, the proposed method is an efficient and effective video patching approach that repairs video at acceptable quality.
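The abstract does not give implementation details for the alignment and background-construction step. The following is a minimal sketch of that idea, assuming OpenCV: Harris-style corners are tracked between frames, a RANSAC affine transform aligns each frame to a reference, and a per-pixel temporal median suppresses moving objects. Function choices and parameters here are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def build_panoramic_background(frames):
    """Align frames to the first one via corner matches and an affine
    transform, then take a per-pixel median to remove moving objects."""
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    # Harris-style corners in the reference frame.
    ref_pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7,
                                      useHarrisDetector=True)
    aligned = [ref.astype(np.float32)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track the reference corners into the current frame (a stand-in
        # for the paper's matching between consecutive frames).
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, gray,
                                                      ref_pts, None)
        good_ref = ref_pts[status.ravel() == 1]
        good_cur = cur_pts[status.ravel() == 1]
        # Robust affine estimate mapping the current frame onto the reference.
        M, _ = cv2.estimateAffinePartial2D(good_cur, good_ref,
                                           method=cv2.RANSAC)
        if M is None:
            continue
        warped = cv2.warpAffine(frame, M, (ref.shape[1], ref.shape[0]))
        aligned.append(warped.astype(np.float32))
    # Per-pixel temporal median filters out transient foreground objects.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```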
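For the tracking step, the abstract names a "three-step search" algorithm. Below is a minimal single-block sketch of the standard three-step block-matching search with a sum-of-absolute-differences (SAD) cost; the block size, initial step, and cost metric are assumptions, and the authors' tracker may differ in detail.

```python
import numpy as np

def three_step_search(prev_gray, cur_gray, top_left, block=16, step=4):
    """Classic three-step search: probe the centre and its 8 neighbours at a
    coarse step, move to the best match, halve the step, repeat until step 1."""
    y0, x0 = top_left
    h, w = prev_gray.shape
    ref_block = prev_gray[y0:y0 + block, x0:x0 + block].astype(np.int32)

    def sad(y, x):
        # Sum of absolute differences; out-of-frame candidates are rejected.
        if y < 0 or x < 0 or y + block > h or x + block > w:
            return np.inf
        cand = cur_gray[y:y + block, x:x + block].astype(np.int32)
        return np.abs(cand - ref_block).sum()

    best_y, best_x = y0, x0
    while step >= 1:
        candidates = [(best_y + dy * step, best_x + dx * step)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        best_y, best_x = min(candidates, key=lambda p: sad(*p))
        step //= 2
    return best_y - y0, best_x - x0  # estimated motion vector (dy, dx)
```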