{"title":"基于反投影流的全向自我运动估计","authors":"O. Shakernia, R. Vidal, S. Sastry","doi":"10.1109/CVPRW.2003.10074","DOIUrl":null,"url":null,"abstract":"The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"35","resultStr":"{\"title\":\"Omnidirectional Egomotion Estimation From Back-projection Flow\",\"authors\":\"O. Shakernia, R. Vidal, S. Sastry\",\"doi\":\"10.1109/CVPRW.2003.10074\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. 
Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.\",\"PeriodicalId\":121249,\"journal\":{\"name\":\"2003 Conference on Computer Vision and Pattern Recognition Workshop\",\"volume\":\"36 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-06-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"35\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2003 Conference on Computer Vision and Pattern Recognition Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPRW.2003.10074\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2003 Conference on Computer Vision and Pattern Recognition Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2003.10074","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Omnidirectional Egomotion Estimation From Back-projection Flow
The current state of the art for egomotion estimation with omnidirectional cameras is to map the optical flow onto the sphere and then apply egomotion algorithms designed for spherical projection. In this paper, we propose instead to back-project image points onto a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and to compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that, in the presence of noise, egomotion algorithms perform better with back-projection flow when the camera translation lies in the X-Y plane. The proposed method is therefore preferable in applications with no Z-axis translation, such as ground robot navigation.
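For illustration, the sketch below lifts a catadioptric image point to the unit viewing sphere under the unified central catadioptric model with mirror parameter ξ. This shows only the generic back-projection step that the spherical approach relies on; the paper's virtual curved retina is a different, camera-intrinsic surface, and the function name and intrinsic parameters here are hypothetical.

```python
import numpy as np

def backproject_to_sphere(u, v, K, xi):
    """Lift an omnidirectional image point (u, v) to the unit viewing sphere
    using the unified central catadioptric model (mirror parameter xi).
    Illustrative only: this is the standard spherical back-projection,
    not the paper's virtual-retina back-projection flow."""
    # Normalize pixel coordinates with the camera intrinsics K.
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    r2 = x * x + y * y
    # Scale factor that places the lifted point on the unit sphere.
    eta = (xi + np.sqrt(1.0 + (1.0 - xi**2) * r2)) / (r2 + 1.0)
    return eta * np.array([x, y, 1.0]) - np.array([0.0, 0.0, xi])

# Example with assumed intrinsics; xi = 1 corresponds to a para-catadioptric camera.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
p = backproject_to_sphere(400.0, 260.0, K, xi=1.0)
assert np.isclose(np.linalg.norm(p), 1.0)  # lifted point lies on the unit sphere
```

Differentiating such a lifting along image trajectories is what turns image-plane optical flow into flow on the chosen curved surface (the sphere here, the virtual retina in the paper).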