{"title":"在未示例的观看条件下使用不变特征的动作识别","authors":"Litian Sun, K. Aizawa","doi":"10.1145/2502081.2508126","DOIUrl":null,"url":null,"abstract":"A great challenge in real-world applications of action recognition is the lack of sufficient label information because of variance in the recording viewpoint and differences between individuals. A system that can adapt itself according to these variances is required for practical use. We present a generic method for extracting view-invariant features from skeleton joints. These view-invariant features are further refined using a stacked, compact autoencoder. To model the challenge of real-world applications, two unexampled test settings (NewView and NewPerson) are used to evaluate the proposed method. Experimental results with these test settings demonstrate the effectiveness of our method.","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"5 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Action recognition using invariant features under unexampled viewing conditions\",\"authors\":\"Litian Sun, K. Aizawa\",\"doi\":\"10.1145/2502081.2508126\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A great challenge in real-world applications of action recognition is the lack of sufficient label information because of variance in the recording viewpoint and differences between individuals. A system that can adapt itself according to these variances is required for practical use. We present a generic method for extracting view-invariant features from skeleton joints. These view-invariant features are further refined using a stacked, compact autoencoder. To model the challenge of real-world applications, two unexampled test settings (NewView and NewPerson) are used to evaluate the proposed method. Experimental results with these test settings demonstrate the effectiveness of our method.\",\"PeriodicalId\":20448,\"journal\":{\"name\":\"Proceedings of the 21st ACM international conference on Multimedia\",\"volume\":\"5 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 21st ACM international conference on Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2502081.2508126\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2502081.2508126","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Action recognition using invariant features under unexampled viewing conditions
A great challenge in real-world applications of action recognition is the lack of sufficient label information, owing to variations in recording viewpoint and differences between individuals. A system that can adapt to these variations is required for practical use. We present a generic method for extracting view-invariant features from skeleton joints. These view-invariant features are further refined using a stacked, compact autoencoder. To model the challenge of real-world applications, two unexampled test settings (NewView and NewPerson) are used to evaluate the proposed method. Experimental results under these test settings demonstrate the effectiveness of our method.
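The abstract does not include code, but the following minimal Python sketch illustrates the general idea under stated assumptions: it is not the authors' exact pipeline. One common way to obtain view-invariant features from skeleton joints is to use pairwise joint distances, which are unchanged under rotation and translation of the camera viewpoint; the refinement step is illustrated with a small autoencoder (here scikit-learn's MLPRegressor trained to reconstruct its input through a narrow hidden layer), with all layer sizes and data shapes chosen arbitrarily for illustration.

```python
# Hypothetical sketch: view-invariant skeleton features + compact autoencoder refinement.
# This is NOT the paper's implementation; joint count, layer sizes, and data are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def pairwise_distance_features(skeleton):
    """skeleton: (num_joints, 3) array of 3-D joint positions for one frame.
    Returns upper-triangular pairwise joint distances as a 1-D feature vector,
    which are invariant to camera rotation and translation."""
    diffs = skeleton[:, None, :] - skeleton[None, :, :]   # (J, J, 3)
    dists = np.linalg.norm(diffs, axis=-1)                # (J, J)
    iu = np.triu_indices(len(skeleton), k=1)
    return dists[iu]                                       # (J*(J-1)/2,)

# Toy data: 200 frames of a 20-joint skeleton (Kinect-style joint count assumed).
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 20, 3))
X = np.stack([pairwise_distance_features(f) for f in frames])  # (200, 190)

# Compact autoencoder: a narrow bottleneck forces a compressed, refined representation.
ae = MLPRegressor(hidden_layer_sizes=(64,), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(X, X)  # train to reconstruct the input from itself

# Hidden-layer activations serve as refined features for a downstream action classifier.
hidden = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])   # (200, 64)
print(hidden.shape)
```

In a full pipeline one would train such a classifier on some viewpoints and subjects and evaluate it on held-out ones, mirroring the NewView and NewPerson test settings described in the abstract.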