Title: Tracking People and Recognizing Their Activities
Authors: Deva Ramanan, D. Forsyth, Andrew Zisserman
DOI: 10.1109/CVPR.2005.353
Venue: Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Workshops), vol. 24, no. 1, p. 1194
Published: 2005-06-20
Citations: 20

Abstract: We present a system for automatic people tracking and activity recognition. Our basic approach to people tracking is to build an appearance model for each person in the video. The video illustrates our use of a stylized-pose detector: our system builds a model of limb appearance from the sparse stylized detections it returns. Our algorithm then reprocesses the video, using the learned appearance models to find people in unrestricted configurations. We can also use our tracker to recover 3D configurations and activity labels, assuming a motion-capture library in which the 3D poses have been labeled offline with activity descriptions.
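The two-pass structure described in the abstract (detect a few easy, stylized poses; learn an appearance model from them; then re-scan the whole video with that model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pose-detector score, the use of mean limb color as the appearance model, and the distance threshold are all hypothetical simplifications.

```python
import numpy as np

def detect_stylized_poses(frames, min_score=0.9):
    """First pass: keep only high-confidence stylized-pose detections.
    Each frame is assumed to carry a precomputed detector score (hypothetical)."""
    return [f for f in frames if f["score"] >= min_score]

def learn_appearance_model(detections):
    """Build a limb-appearance model from the sparse stylized detections.
    Here the 'model' is simply the mean limb color (a stand-in for the
    richer appearance models in the paper)."""
    colors = np.array([d["limb_color"] for d in detections], dtype=float)
    return colors.mean(axis=0)

def track_all_frames(frames, model, tol=60.0):
    """Second pass: reprocess every frame, matching appearance rather than
    pose, so people are found in unrestricted configurations."""
    tracked = []
    for f in frames:
        dist = np.linalg.norm(np.asarray(f["limb_color"], dtype=float) - model)
        if dist < tol:
            tracked.append(f["t"])
    return tracked

# Toy usage: frame 1 has a low detector score (hard pose) but matching
# appearance, so the second pass recovers it; frame 2 is a different person.
frames = [
    {"t": 0, "score": 0.95, "limb_color": [200, 30, 30]},
    {"t": 1, "score": 0.20, "limb_color": [198, 32, 28]},
    {"t": 2, "score": 0.10, "limb_color": [10, 10, 10]},
]
detections = detect_stylized_poses(frames)
model = learn_appearance_model(detections)
tracked = track_all_frames(frames, model)
```

Note how the second pass recovers frame 1, which the pose detector missed, because the appearance model generalizes beyond the stylized poses used to learn it.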