A robust background subtraction algorithm for motion based video scene segmentation in embedded platforms

Muhammad Haris Khan, I. Kypraios, U. Khan
DOI: 10.1145/1838002.1838037
Journal: International Conference on Frontiers of Information Technology
Published: 2009-12-16
Citations: 9

Abstract

Recent work on wavelets applied to images and video sequences has been exploited to extract robust illumination-invariant features. This paper presents a robust background subtraction algorithm for motion-based video scene segmentation on embedded platforms. To be useful, any machine or computer vision algorithm should be able to separate the background and foreground information (e.g. objects) in a given scene, so scene segmentation that is invariant to lighting conditions is essential to the success of any real-time algorithm. We designed two main algorithms, Six Frames (6-Frames) and Time Interval with Memory (TIME), to segment a video scene robustly based on motion detection on embedded platforms. The former uses the first six frames, and the latter samples frames at regular time intervals with memory, to generate a background reference frame. Our algorithms use bandpass video scene filtering with wavelets to extract illumination-invariant scene features and then combine them efficiently into the background reference frame. A hardware-efficient image stabilization capability was added to remove unwanted motion due to camera movement. The algorithms were tested on three moving-bee video sequences: static background, moving shadow, and destabilized. Performance was evaluated by the number of frames in which the moving target was detected in each video sequence.
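As a rough illustration of the background-reference idea behind 6-Frames (not the authors' implementation; the per-pixel median, the pixel values, and the threshold below are all assumptions made for the sketch), a reference frame can be built from the first six frames and a moving pixel flagged wherever the current frame deviates from it:

```python
from statistics import median

def build_reference(frames):
    """Per-pixel median over the first six frames -> background reference."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)] for y in range(h)]

def foreground_mask(frame, reference, threshold=20):
    """1 where the frame differs from the reference by more than threshold."""
    return [[1 if abs(p - r) > threshold else 0
             for p, r in zip(row, ref_row)]
            for row, ref_row in zip(frame, reference)]

# Six identical 2x2 grayscale "frames", except one transient noisy pixel,
# which the median rejects; the test frame has a bright object at (1, 0).
frames = [[[10, 10], [10, 10]] for _ in range(6)]
frames[2][0][0] = 200
reference = build_reference(frames)
test_frame = [[10, 10], [200, 10]]
print(foreground_mask(test_frame, reference))  # [[0, 0], [1, 0]]
```

The median makes the reference robust to a target that happens to move through a pixel in one of the six frames; the paper's wavelet bandpass filtering would additionally suppress slow illumination changes before this comparison.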