FgGAN: A Cascaded Unpaired Learning for Background Estimation and Foreground Segmentation

Prashant W. Patil, S. Murala
{"title":"FgGAN: A Cascaded Unpaired Learning for Background Estimation and Foreground Segmentation","authors":"Prashant W. Patil, S. Murala","doi":"10.1109/WACV.2019.00193","DOIUrl":null,"url":null,"abstract":"The moving object segmentation (MOS) in videos with bad weather, irregular motion of objects, camera jitter, shadow and dynamic background scenarios is still an open problem for computer vision applications. To address these issues, in this paper, we propose an approach named as Foreground Generative Adversarial Network (FgGAN) with the recent concepts of generative adversarial network (GAN) and unpaired training for background estimation and foreground segmentation. To the best of our knowledge, this is the first paper with the concept of GAN-based unpaired learning for MOS. Initially, video-wise background is estimated using GAN-based unpaired learning network (network-I). Then, to extract the motion information related to foreground, motion saliency is estimated using estimated background and current video frame. Further, estimated motion saliency is given as input to the GANbased unpaired learning network (network-II) for foreground segmentation. To examine the effectiveness of proposed FgGAN (cascaded networks I and II), the challenging video categories like dynamic background, bad weather, intermittent object motion and shadow are collected from ChangeDetection.net-2014 [26] database. The segmentation accuracy is observed qualitatively and quantitatively in terms of F-measure and percentage of wrong classification (PWC) and compared with the existing state-of-the-art methods. From experimental results, it is evident that the proposed FgGAN shows significant improvement in terms of F-measure and PWC as compared to the existing stateof-the-art methods for MOS.","PeriodicalId":436637,"journal":{"name":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV.2019.00193","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 30

Abstract

Moving object segmentation (MOS) in videos with bad weather, irregular object motion, camera jitter, shadows, and dynamic backgrounds remains an open problem for computer vision applications. To address these issues, in this paper we propose an approach named the Foreground Generative Adversarial Network (FgGAN), which combines the recent concepts of generative adversarial networks (GANs) and unpaired training for background estimation and foreground segmentation. To the best of our knowledge, this is the first work to apply GAN-based unpaired learning to MOS. First, a video-wise background is estimated using a GAN-based unpaired learning network (network-I). Then, to extract motion information related to the foreground, motion saliency is estimated from the estimated background and the current video frame. The estimated motion saliency is then given as input to a second GAN-based unpaired learning network (network-II) for foreground segmentation. To examine the effectiveness of the proposed FgGAN (cascaded networks I and II), challenging video categories such as dynamic background, bad weather, intermittent object motion, and shadow are taken from the ChangeDetection.net-2014 [26] database. Segmentation accuracy is evaluated qualitatively and quantitatively in terms of F-measure and percentage of wrong classification (PWC), and compared with existing state-of-the-art methods. Experimental results show that the proposed FgGAN yields significant improvements in F-measure and PWC over existing state-of-the-art MOS methods.