{"title":"A robust background subtraction algorithm for motion based video scene segmentation in embedded platforms","authors":"Muhammad Haris Khan, I. Kypraios, U. Khan","doi":"10.1145/1838002.1838037","DOIUrl":null,"url":null,"abstract":"Recent work on wavelets applied to images or a video sequence has been exploited for extracting robust illumination invariant features. The paper presents robust background subtraction algorithm to segment motion based video scene in embedded platforms. Every machine or computer vision algorithm to be useful should be able to separate the different background and foreground information (e.g. objects) in the given scene. Therefore, it is essential to the success of any real time algorithm, the scene segmentation invariant to lighting conditions. We designed two main algorithms; Six frames (6-Frames) and Time Interval with Memory (TIME) to segment the video scene robustly based on motion detection in embedded platforms. The former uses the first six frames and the latter samples the frames at regular intervals of time with memory to generate a background reference frame. Our algorithms used bandpass video scene filtering with wavelets for extracting illumination invariant scene features and then combine them efficiently into the background reference frame. Hardware efficient image stabilization capability was added to remove the unwanted motion due to camera movement. The algorithms were tested using three moving bee videos sequences; static background, moving shadow and destabilized. 
Performance of algorithms was evaluated on the basis of number of frames in which the moving target was detected for each video sequence.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"33 5-6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Frontiers of Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1838002.1838037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
Recent work applying wavelets to images and video sequences has been exploited to extract robust illumination-invariant features. This paper presents a robust background subtraction algorithm for motion-based video scene segmentation on embedded platforms. To be useful, any machine or computer vision algorithm should be able to separate the background and foreground information (e.g. objects) in a given scene. Scene segmentation that is invariant to lighting conditions is therefore essential to the success of any real-time algorithm. We designed two main algorithms, Six Frames (6-Frames) and Time Interval with Memory (TIME), to robustly segment video scenes based on motion detection on embedded platforms. The former uses the first six frames, while the latter samples frames at regular time intervals with memory, to generate a background reference frame. Our algorithms use bandpass filtering of the video scene with wavelets to extract illumination-invariant scene features, which are then combined efficiently into the background reference frame. A hardware-efficient image stabilization capability was added to remove unwanted motion due to camera movement. The algorithms were tested on three moving-bee video sequences: static background, moving shadow, and destabilized. The performance of the algorithms was evaluated on the basis of the number of frames in which the moving target was detected in each video sequence.
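The core mechanism the abstract describes, building a background reference frame from an initial set of frames and flagging pixels that deviate from it, can be sketched as follows. This is a minimal illustration only: it uses a per-pixel median over the first six frames and a fixed difference threshold, whereas the paper's actual method combines wavelet-filtered, illumination-invariant features; the function names and the threshold value are assumptions, not taken from the paper.

```python
import numpy as np

def build_background(frames):
    # 6-Frames idea (simplified): form a background reference from the
    # first six frames via a per-pixel median. The paper instead combines
    # wavelet bandpass-filtered features; this is an illustrative stand-in.
    return np.median(np.stack(frames[:6]).astype(np.int16), axis=0)

def foreground_mask(frame, background, threshold=25):
    # Background subtraction: mark pixels whose absolute difference from
    # the reference frame exceeds a threshold as foreground (motion).
    # The threshold value here is an arbitrary illustrative choice.
    diff = np.abs(frame.astype(np.int16) - background)
    return diff > threshold
```

A moving target would then be counted as "detected" in a frame when its foreground mask contains a sufficiently large connected region, which matches the frame-count evaluation criterion described above.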