{"title":"Online Video Stabilization Based on Converting Deep Dense Optical Flow to Motion Mesh","authors":"Luan Tran, N. Ly","doi":"10.1109/NICS51282.2020.9335882","DOIUrl":null,"url":null,"abstract":"Video stabilization is very necessary for shaky videos. Until now, there are many offline methods (using both past and future frames) for stabilization. These methods have good results for stabilizing, but not be consistent with real applications. So inspired by the approach, first, we divide each frame into grids and calculate motion vectors at each vertex. Second, accumulating motion mesh across past frames to get motion curves. Finally, smoothing these curves to stabilize video. The difference of our proposed method is the way to calculate motion mesh. Instead of propagating motion vectors at feature points to mesh vertexes, we take advantage of the power of deep learning network to estimate dense optical flow, then convert it to motion mesh. Our experiment has shown that output videos of our online method (only using past frames) have stability scores which are competitive with offline methods. Our method is still effective where the similarity between two consecutive frames is low (due to fast camera, fast zooming, etc.), in this case feature-based methods have not achieved good results.","PeriodicalId":308944,"journal":{"name":"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NICS51282.2020.9335882","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Video stabilization is essential for shaky videos. To date, most stabilization methods are offline (using both past and future frames). These methods stabilize well, but they are not suited to real-time applications. Inspired by the mesh-based approach, we first divide each frame into a grid and compute a motion vector at each vertex. Second, we accumulate the motion meshes across past frames to obtain motion curves. Finally, we smooth these curves to stabilize the video. The difference in our proposed method lies in how the motion mesh is computed: instead of propagating motion vectors at feature points to the mesh vertices, we leverage a deep learning network to estimate dense optical flow and then convert it to a motion mesh. Our experiments show that the output videos of our online method (using only past frames) achieve stability scores competitive with offline methods. Our method remains effective when the similarity between two consecutive frames is low (due to fast camera motion, fast zooming, etc.), a case in which feature-based methods do not achieve good results.
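To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation: it converts a dense optical-flow field (as produced by any deep flow estimator) into a per-vertex motion mesh, accumulates the meshes over past frames into motion curves, and smooths those curves with a sliding window of past frames only. The grid size, window length, and median pooling per cell are illustrative assumptions.

```python
import numpy as np

def flow_to_motion_mesh(flow, mesh_rows=16, mesh_cols=16):
    """Convert a dense flow field (H x W x 2) to motion vectors at the
    (mesh_rows+1) x (mesh_cols+1) grid vertices, using the median flow in a
    neighborhood around each vertex for robustness to outliers."""
    h, w = flow.shape[:2]
    ys = np.linspace(0, h - 1, mesh_rows + 1).astype(int)
    xs = np.linspace(0, w - 1, mesh_cols + 1).astype(int)
    half_h, half_w = h // (2 * mesh_rows), w // (2 * mesh_cols)
    mesh = np.zeros((mesh_rows + 1, mesh_cols + 1, 2), dtype=np.float32)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            y0, y1 = max(0, y - half_h), min(h, y + half_h + 1)
            x0, x1 = max(0, x - half_w), min(w, x + half_w + 1)
            patch = flow[y0:y1, x0:x1].reshape(-1, 2)
            mesh[i, j] = np.median(patch, axis=0)
    return mesh

class OnlineStabilizer:
    """Accumulate per-vertex motion over past frames into motion curves and
    smooth them online with a moving average (no future frames used)."""
    def __init__(self, window=30):
        self.window = window
        self.curves = []  # accumulated motion mesh per frame (the camera path)

    def update(self, motion_mesh):
        prev = self.curves[-1] if self.curves else np.zeros_like(motion_mesh)
        self.curves.append(prev + motion_mesh)        # accumulate motion curve
        hist = np.stack(self.curves[-self.window:])   # past frames only
        smooth_path = hist.mean(axis=0)               # smoothed camera path
        # Per-vertex compensation to warp the current frame toward the
        # smoothed path (smoothed minus actual accumulated motion).
        return smooth_path - self.curves[-1]
```

In use, each incoming frame would be paired with the previous one, passed through the flow network, converted with flow_to_motion_mesh, and fed to OnlineStabilizer.update; the returned per-vertex offsets drive a mesh warp of the current frame. The simple moving average stands in for whatever smoothing the paper applies to the motion curves.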