Video stabilization to a global 3D frame of reference by fusing orientation sensor and image alignment data
Oscar Nestares, Yoram Gat, H. Haussecker, I. Kozintsev
2010 IEEE International Symposium on Mixed and Augmented Reality, November 22, 2010
DOI: 10.1109/ISMAR.2010.5643595
Estimating the 3D orientation of the camera in a video sequence within a global frame of reference is useful for video stabilization when displaying the video in a virtual 3D environment, as well as for accurate navigation and other applications. This task requires input from orientation sensors attached to the camera to provide absolute 3D orientation in a geographical frame of reference. However, high-frequency noise in the sensor readings prevents the accurate orientation estimates required for visually stable presentation of video sequences acquired with a camera subject to jitter, such as a handheld or vehicle-mounted camera. On the other hand, image alignment has proven successful for image stabilization, providing accurate frame-to-frame orientation estimates, but it drifts over time due to error and bias accumulation and lacks absolute orientation. In this paper we propose a practical method for generating high-accuracy estimates of the 3D orientation of the camera within a global frame of reference by fusing orientation estimates from an efficient image-based alignment method with the estimates from an orientation sensor, overcoming the limitations of the component methods.
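
The abstract describes fusing drift-prone but locally accurate image-based rotations with noisy but absolute sensor orientations. The abstract does not spell out the fusion algorithm, so the following is a minimal sketch only, assuming a complementary-filter-style blend on SO(3) built with SciPy rotations; the gain `alpha`, the function `fuse_orientations`, and the camera-to-world composition convention are illustrative assumptions, not the authors' method.

```python
# Sketch of sensor/image-alignment orientation fusion (illustrative, not the
# paper's algorithm). Image alignment supplies accurate frame-to-frame
# rotations; the orientation sensor supplies noisy but drift-free absolute
# orientation in a global frame.
from scipy.spatial.transform import Rotation as R


def fuse_orientations(sensor_abs, image_rel, alpha=0.02):
    """Fuse absolute sensor orientations with relative image-based rotations.

    sensor_abs : list of Rotation, absolute camera orientation per frame (noisy).
    image_rel  : list of Rotation, rotation from frame k-1 to frame k
                 (length len(sensor_abs) - 1), accurate but drift-prone.
    alpha      : small gain (assumed value) pulling the integrated estimate
                 toward the sensor, suppressing drift without passing the
                 sensor's high-frequency noise.
    Convention (assumed): camera-to-world orientations compose as
    R_k = R_{k-1} * rel_k.
    """
    fused = [sensor_abs[0]]                    # initialize from the absolute sensor
    for k, rel in enumerate(image_rel, start=1):
        propagated = fused[-1] * rel           # integrate image-based motion
        # Geodesic blend: rotate a fraction `alpha` of the way from the
        # propagated estimate toward the sensor's absolute reading.
        delta = propagated.inv() * sensor_abs[k]
        correction = R.from_rotvec(alpha * delta.as_rotvec())
        fused.append(propagated * correction)
    return fused
```

With a small `alpha`, the fused trajectory inherits the smooth frame-to-frame behaviour of the image-based estimates while being slowly anchored to the sensor's absolute reference, which is the qualitative behaviour the abstract attributes to the combined estimate.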