Synchronizing Video Cameras with Non-overlapping Fields of View
Darlan N. Brito, F. Pádua, R. Carceroni, G. Pereira
2008 XXI Brazilian Symposium on Computer Graphics and Image Processing, October 12, 2008
DOI: 10.1109/SIBGRAPI.2008.28
Citations: 5
Abstract
This paper describes a method to estimate the temporal alignment between N unsynchronized video sequences captured by cameras with non-overlapping fields of view. The sequences are recorded by stationary video cameras, with fixed intrinsic and extrinsic parameters. The proposed approach reduces the problem of synchronizing N non-overlapping sequences to the robust estimation of a single line in R^(N+1). This line captures all temporal relations between the sequences and a moving sensor in the scene, whose locations in the world coordinate system may be estimated at a constant sampling rate. Experimental results with real-world sequences show that our method can accurately align the videos.
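To make the abstract's central reduction concrete, the sketch below illustrates robust estimation of a single line in R^(N+1) whose coordinates are the N camera timelines plus the moving sensor's timeline, and how per-camera offsets and rates can be read off that line. This is only an illustration of the geometric idea, not the authors' algorithm: the step that produces candidate time tuples (by matching the sensor's world positions against its detections in each camera) is omitted, and the names fit_line_ransac, offsets_and_rates, candidate_tuples, the iteration count, and the inlier tolerance are hypothetical choices of ours.

```python
# Minimal sketch, under our own assumptions: fit a line in R^(N+1) to putative
# simultaneous events and read per-camera temporal offsets/rates from it.
import numpy as np


def fit_line_ransac(points, n_iters=1000, inlier_tol=0.5, seed=None):
    """Fit a line in R^D to `points` (M x D) with a simple RANSAC loop.

    Returns (point_on_line, unit_direction, inlier_mask)."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    m = points.shape[0]
    best_mask = None

    for _ in range(n_iters):
        # Sample two distinct points to define a candidate line.
        i, j = rng.choice(m, size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p
        proj = diff @ d
        dist = np.linalg.norm(diff - np.outer(proj, d), axis=1)
        mask = dist < inlier_tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask

    # Refine on inliers: centroid plus principal direction via SVD.
    inliers = points[best_mask]
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid)
    return centroid, vt[0], best_mask


def offsets_and_rates(centroid, direction):
    """Read each camera's temporal offset and relative rate off the fitted line.

    Parameterizing the line as x(u) = centroid + u * direction, camera i's
    frame index at sensor sample s = 0 is c_i - c_s * (d_i / d_s), and its
    rate relative to the sensor timeline is d_i / d_s."""
    rates = direction[:-1] / direction[-1]
    offsets = centroid[:-1] - centroid[-1] * rates
    return offsets, rates


# Hypothetical usage: candidate_tuples has shape (M, N+1), one row per putative
# simultaneous event, columns [t_1, ..., t_N, s] (camera frame indices followed
# by the sensor sample index).
# centroid, direction, inliers = fit_line_ransac(candidate_tuples, inlier_tol=0.5)
# offsets, rates = offsets_and_rates(centroid, direction)
```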