Authors: Chung-Ching Chang, H. Aghajan
DOI: 10.1109/AVSS.2007.4425338
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
Published: 2007-09-05
Citation count: 4
A LQR spatiotemporal fusion technique for face profile collection in smart camera surveillance
In this paper, we propose a joint face orientation estimation technique for face profile collection in smart camera networks. The system is composed of in-node coarse estimation and joint refined estimation between cameras. In-node signal processing algorithms are designed to be lightweight to reduce the computation load, yielding coarse estimates which may be erroneous. The proposed model-based technique determines the orientation and the angular motion of the face using two features, namely the hair-face ratio and the head optical flow. These features yield an estimate of the face orientation and the angular velocity through least squares (LS) analysis. In the joint refined estimation step, a discrete-time linear dynamical model is defined. Spatiotemporal consistency between cameras is measured by a cost function, which is minimized through linear quadratic regulation (LQR) to yield a robust closed-loop feedback system that estimates the face orientation, the angular motion, and the relative angular difference between each camera and the face. Based on the face orientation estimates, a collection of face profiles is accumulated over time as the human subject moves around. The proposed technique does not require camera locations to be known a priori, and hence is applicable to vision networks deployed casually without localization.
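The abstract's refinement step rests on standard discrete-time LQR: given a linear dynamical model, a quadratic cost is minimized to obtain a state-feedback gain, yielding a stable closed loop. The sketch below illustrates that machinery on a toy two-state model (orientation and angular velocity); the matrices, time step, and weights are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

# Toy state x = [orientation error (rad), angular velocity error (rad/s)].
# All matrices below are assumed for illustration, not taken from the paper.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])   # constant-angular-velocity dynamics
B = np.array([[0.0],
              [dt]])         # control input corrects angular velocity
Q = np.diag([10.0, 1.0])     # penalize orientation error most heavily
R = np.array([[0.1]])        # small penalty on control effort

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr(A, B, Q, R)

# Closed-loop feedback u = -Kx drives the estimation error toward zero.
x = np.array([[0.5], [0.2]])   # initial coarse-estimate error
for _ in range(100):
    u = -K @ x
    x = A @ x + B @ u
print(np.abs(x).max())  # residual error after 100 feedback steps
```

The closed-loop matrix `A - B @ K` has all eigenvalues inside the unit circle, which is the robustness property the abstract attributes to the LQR-based fusion; in the paper this loop additionally couples estimates across cameras through the spatiotemporal consistency cost.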