{"title":"Video Stabilization Based on Multi-scale Local Color Invariants","authors":"Kang Feng, Han Yonghua, Zhang Hua-xiong","doi":"10.1109/ICNDC.2013.35","DOIUrl":null,"url":null,"abstract":"Feature extraction and matching is the key process of motion estimation, and determines the performance of video stabilization to a great extent. A novel approach of video stabilization was proposed based on multi-scale colored local invariant features. The proposed approach transformed the image from RGB color model to color invariant model, and built up multi-scale color invariant space based on Gaussian pyramids, then extracted FAST feature points in the multiscale space and matched the feature points by building Fast Retina Key-point (FREAK) descriptors, finally estimated interframe motions in the video by M-estimator Sample Consensus (MSAC) algorithm, and processed image compensation and smoothing. Experiments demonstrated that the approach was efficient and more robust than general methods especial in harsh imaging conditions.","PeriodicalId":152234,"journal":{"name":"2013 Fourth International Conference on Networking and Distributed Computing","volume":"80 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 Fourth International Conference on Networking and Distributed Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNDC.2013.35","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Feature extraction and matching are the key steps of motion estimation and largely determine the performance of video stabilization. A novel video stabilization approach based on multi-scale colored local invariant features was proposed. The proposed approach transformed the image from the RGB color model to a color invariant model and built a multi-scale color invariant space based on Gaussian pyramids; it then extracted FAST feature points in the multi-scale space and matched them using Fast Retina Keypoint (FREAK) descriptors, finally estimated the inter-frame motion in the video with the M-estimator Sample Consensus (MSAC) algorithm, and performed image compensation and motion smoothing. Experiments demonstrated that the approach is efficient and more robust than general methods, especially under harsh imaging conditions.
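For concreteness, a minimal Python/OpenCV sketch of the per-frame-pair pipeline the abstract describes (color invariant transform, multi-scale FAST detection, FREAK matching, robust motion estimation) might look like the following. This is not the authors' implementation: the Gaussian opponent color coefficients, the pyramid depth, the FAST threshold, and the use of RANSAC in place of MSAC (which OpenCV does not expose) are all assumptions, and FREAK requires the opencv-contrib build.

```python
# Hypothetical sketch of one inter-frame motion estimate, assuming
# opencv-contrib-python is installed. RANSAC stands in for the paper's MSAC step.
import cv2
import numpy as np

def color_invariant(bgr):
    """Map a BGR frame to an intensity-like color-invariant channel.
    Assumption: Gaussian opponent color model coefficients; the paper's
    exact color invariant model may differ."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    E = 0.06 * r + 0.63 * g + 0.27 * b
    return cv2.normalize(E, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_multiscale(gray, levels=3, fast_threshold=20):
    """Detect FAST keypoints on a Gaussian pyramid and map them back to level 0."""
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
    keypoints, img = [], gray
    for lvl in range(levels):
        scale = 2 ** lvl
        for kp in fast.detect(img, None):
            keypoints.append(cv2.KeyPoint(kp.pt[0] * scale, kp.pt[1] * scale,
                                          kp.size * scale))
        img = cv2.pyrDown(img)
    return keypoints

def estimate_interframe_motion(prev_bgr, curr_bgr):
    freak = cv2.xfeatures2d.FREAK_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    frames = []
    for bgr in (prev_bgr, curr_bgr):
        gray = color_invariant(bgr)
        kps = detect_multiscale(gray)
        kps, desc = freak.compute(gray, kps)   # FREAK descriptors at detected points
        frames.append((kps, desc))

    (kp1, d1), (kp2, d2) = frames
    matches = matcher.match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robust 2x3 similarity/affine motion estimate between consecutive frames;
    # this transform would feed the later motion smoothing and compensation steps.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M
```

In such a pipeline the per-frame transforms would be accumulated, low-pass filtered to obtain a smoothed camera path, and each frame warped by the difference between the original and smoothed paths to produce the stabilized video.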