Image editing-based data augmentation for illumination-insensitive background subtraction

Dimitrios Sakkos, Edmond S. L. Ho, Hubert P. H. Shum, Garry Elvin
Journal of Enterprise Information Management, published 2020-08-17. DOI: 10.1108/jeim-02-2020-0042 (https://doi.org/10.1108/jeim-02-2020-0042)
Citations: 2

Abstract

Purpose – A core challenge in background subtraction (BGS) is handling videos with sudden illumination changes between consecutive frames. In our pilot study published at SKIMA 2019, we tackled the problem from a data point of view using data augmentation. Our method not only creates unlimited training data on the fly but also applies semantic transformations of illumination that enhance the generalisation of the model.

Design/methodology/approach – In the SKIMA 2019 pilot study, the proposed framework simulates flashes and shadows by applying the Euclidean distance transform over a randomly generated binary mask. In this paper, we further enhance the data augmentation framework by proposing new variations of image appearance, both locally and globally.

Findings – Experimental results demonstrate the contribution of the synthetic samples to the models' ability to perform BGS even when significant illumination changes take place.

Originality/value – Such data augmentation allows us to effectively train an illumination-invariant deep learning model for BGS. We further propose a post-processing method that removes noise from the output binary segmentation map, resulting in a cleaner, more accurate segmentation map that generalises to multiple scenes under different conditions. We show that it is possible to train deep learning models even with very limited training samples. The source code of the project is publicly available at https://github.com/dksakkos/illumination_augmentation
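The core augmentation idea in the abstract can be sketched briefly: take the Euclidean distance transform of a randomly generated binary mask, turn it into a smooth illumination field, and use that field to brighten (flash) or darken (shadow) a frame. The sketch below is an illustration of that idea only, not the authors' released implementation; the function names and parameters (`n_seeds`, `strength`) are hypothetical.

```python
# Minimal sketch of EDT-based illumination augmentation, assuming
# random seed points define the binary mask. Not the paper's code.
import numpy as np
from scipy.ndimage import distance_transform_edt

def illumination_mask(height, width, n_seeds=3, rng=None):
    """Build a smooth field in [0, 1] from the EDT of random seed points."""
    rng = np.random.default_rng(rng)
    seeds = np.ones((height, width), dtype=bool)
    ys = rng.integers(0, height, n_seeds)
    xs = rng.integers(0, width, n_seeds)
    seeds[ys, xs] = False            # zeros mark the random seed locations
    dist = distance_transform_edt(seeds)  # distance to the nearest seed
    return 1.0 - dist / dist.max()        # 1.0 at seeds, fading outward

def augment_illumination(frame, strength=0.6, flash=True, rng=None):
    """Simulate a local flash (brighten) or shadow (darken) on one frame."""
    h, w = frame.shape[:2]
    field = illumination_mask(h, w, rng=rng)
    if frame.ndim == 3:
        field = field[..., None]     # broadcast over colour channels
    factor = 1.0 + strength * field if flash else 1.0 - strength * field
    out = frame.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the field varies smoothly with distance from the seeds, the simulated flash or shadow has soft edges rather than a hard cut-off, which is what makes the transformation look like a plausible illumination change rather than a pasted patch.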