Title: Dynamic background substraction for object extraction using virtual reality based prediction
Authors: A. Dominguez-Caneda, C. Urdiales, F. Sandoval
DOI: 10.1109/MELCON.2006.1653139 (https://doi.org/10.1109/MELCON.2006.1653139)
Venue: MELECON 2006 - 2006 IEEE Mediterranean Electrotechnical Conference
Published: 2006-05-16
Citations: 12
Abstract
This paper presents a new approach to background subtraction for extracting video objects from a sequence. Rather than working with a fixed, flat background, the system relies on a virtual 3D model of the background that is automatically created and updated from a sequence of images of the environment. Each time an image is captured, the position of the camera is estimated and the corresponding view of the background is rendered. Subtracting this rendered view from the frame yields the video objects not present in the background. To estimate the camera position, both for building the background model and for rendering a background view, artificial landmarks of known size are distributed in the environment. The system works correctly in real environments at over 20 frames per second, and it recovers from illumination changes and automatic white balance (AWB) thanks to the background updating algorithm.
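The core per-pixel step described above, subtracting a rendered background view from the current frame to obtain a foreground mask, can be sketched roughly as follows. This is a minimal illustration only: the paper's actual pipeline renders the background view from a camera-pose-dependent 3D model, while here a precomputed background image is assumed, and the function name and threshold value are illustrative, not from the paper.

```python
import numpy as np

def extract_foreground(frame, rendered_background, threshold=30):
    """Subtract a rendered background view from the current frame.

    Pixels whose absolute intensity difference exceeds `threshold`
    are marked as foreground (objects not present in the background).
    Assumes grayscale images as uint8 NumPy arrays of the same shape.
    """
    # Cast to a signed type so the difference does not wrap around
    diff = np.abs(frame.astype(np.int16) - rendered_background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground, 0 = background

# Toy example: a flat background with one bright "object" patch in the frame
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # a 2x2 object appears in the frame

mask = extract_foreground(frame, background)
```

In the paper's setting, `rendered_background` would be regenerated for every frame from the estimated camera pose, which is what lets the method handle a moving camera where a fixed reference image would fail.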