Integrating Motion and Illumination Models for 3D Tracking
A. Roy-Chowdhury, Yilei Xu
Computer Vision for Interactive and Intelligent Environment (CVIIE'05), 17 November 2005. DOI: 10.1109/CVIIE.2005.11
One of the persistent challenges in computer vision has been tracking objects under varying lighting conditions. In this paper we present a method for estimating the 3D motion of a rigid object from a monocular video sequence under arbitrary changes in the illumination conditions under which the video was captured. This is achieved by alternately estimating motion and illumination parameters using a generative model that integrates the effects of motion, illumination, and structure within a unified mathematical framework. The motion is represented in terms of translation and rotation of the object centroid, and the illumination is represented using a linear basis of spherical harmonics. The method does not assume any model for the variation of the illumination conditions: lighting can change slowly or drastically. For the multi-camera tracking scenario, we propose a new photometric constraint that is valid over the overlapping field of view between two cameras. This is similar in nature to the well-known epipolar constraint, except that it relates the photometric parameters of the two views, and it can provide an additional constraint for illumination-invariant multi-camera tracking. We demonstrate the effectiveness of our tracking algorithm on single- and multi-camera video sequences under severe changes of lighting conditions.
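The spherical-harmonics illumination representation mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard nine-term (order ≤ 2) real spherical-harmonics basis used in Lambertian lighting models, with a least-squares fit of the lighting coefficients to an observed image; the function names and the fitting step are our own illustration under those standard assumptions, not the paper's actual implementation:

```python
import numpy as np

def sh_basis(normals):
    """Real spherical harmonics of order <= 2 evaluated at unit surface
    normals. normals: (N, 3) array -> (N, 9) basis matrix."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.2820948),   # Y_00 = 1 / (2 sqrt(pi))
        0.4886025 * y,                # Y_1,-1
        0.4886025 * z,                # Y_1,0
        0.4886025 * x,                # Y_1,1
        1.0925484 * x * y,            # Y_2,-2
        1.0925484 * y * z,            # Y_2,-1
        0.3153916 * (3 * z**2 - 1),   # Y_2,0
        1.0925484 * x * z,            # Y_2,1
        0.5462742 * (x**2 - y**2),    # Y_2,2
    ], axis=1)

def estimate_illumination(image, albedo, normals):
    """Least-squares fit of 9 SH lighting coefficients to observed pixel
    intensities, given per-pixel albedo and surface normals."""
    B = albedo[:, None] * sh_basis(normals)       # (N, 9) lighting basis
    coeffs, *_ = np.linalg.lstsq(B, image, rcond=None)
    return coeffs, B @ coeffs                     # coefficients, re-rendered image
```

In an alternating scheme like the one the abstract describes, a step of this form (illumination given current pose) would be interleaved with a motion-update step; here only the illumination fit is shown.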