Rana Ashar, Burhan A. Sadiq, Hira Mohiuddin, Saniya Ashraf, Muhammad Imran, A. Ullah
Video Stabilization using RAFT-based Optical Flow
2023 International Conference on Robotics and Automation in Industry (ICRAI), 2023-03-03. DOI: 10.1109/ICRAI57502.2023.10089609
Video stabilization is a basic need for modern-day video capture. Many methods have been proposed over the years, including 2D- and 3D-based models as well as models that use optimization and deep neural networks. This work describes the implementation of the cutting-edge Recurrent All-Pairs Field Transforms (RAFT) model for optical flow estimation in video stabilization. We use a pipeline that accommodates large motion and then passes the results to the optical flow estimator for better accuracy. It then compensates for the inaccuracies of the optical flow, making the method robust to occlusion, parallax, and moving objects. Our approach yields better results, both visually and quantitatively, than other optimization- and deep-learning-based video stabilization techniques.
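The pipeline the abstract describes — estimate inter-frame motion, accumulate it into a camera trajectory, smooth that trajectory, and correct each frame by the difference — can be sketched as below. This is a minimal illustration, not the authors' implementation: the per-frame motion is given directly as 2D translations (in the paper it would come from RAFT optical flow), and the smoothing is a simple moving average.

```python
import numpy as np

def smooth_trajectory(per_frame_motion, window=5):
    """Smooth the cumulative camera path with a moving average and
    return per-frame corrections. `per_frame_motion` is an (N, 2)
    array of (dx, dy) between consecutive frames; in a full pipeline
    this would be derived from dense optical flow such as RAFT."""
    trajectory = np.cumsum(per_frame_motion, axis=0)  # raw camera path
    pad = window // 2
    padded = np.pad(trajectory, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(2)],
        axis=1,
    )
    # Offset to warp each frame by, so it follows the smooth path.
    return smoothed - trajectory

# Jittery horizontal pan: constant drift plus alternating shake.
motion = np.array([[1.0 + (-1) ** k, 0.0] for k in range(10)])
corrections = smooth_trajectory(motion)
```

Applying each correction as a per-frame warp (e.g. a translation of the image) removes the high-frequency shake while preserving the intended pan; RAFT's contribution in the paper is supplying a far more accurate motion estimate than simple translation models.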