Recognition of Human Actions Based on Temporal Motion Templates

Samy Bakheet, A. Al-Hamadi, M. Mofaddel

British Journal of Applied Science & Technology, pp. 1–11, published 2017-01-10. DOI: 10.9734/bjast/2017/28318
Despite their attractive properties of invariance, robustness, and reliability, statistical motion descriptors derived from temporal templates have apparently not received the attention they deserve in the human action recognition literature. In this paper, we propose a novel approach to action recognition in which a fuzzy representation based on temporal motion templates models human actions as time series of low-dimensional descriptors. A Naïve Bayes (NB) classifier is then trained on these features for action classification. When tested on a realistic action dataset comprising a large collection of video data, the approach achieves a recognition rate as high as 93.7% while remaining tractable for real-time operation.
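The paper's exact fuzzy representation and descriptor set are not given in the abstract, but the general pipeline it describes, a temporal motion template reduced to a low-dimensional feature vector per time step, can be illustrated with the classic motion history image (MHI). The sketch below is a generic, hypothetical illustration of that idea, not the authors' method: moving pixels are stamped with a maximum timestamp `tau` and older motion decays linearly, and a simple 4-value centroid/spread descriptor stands in for whatever low-dimensional features the paper actually uses.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=30, thresh=25):
    """Update a motion history image: pixels that changed between the two
    frames are set to tau; all other pixels decay by one step toward zero,
    so the template encodes both where and how recently motion occurred."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

def descriptor(mhi):
    """A toy low-dimensional descriptor of the template: the intensity-
    weighted centroid and spread of the motion region, normalized by the
    image size (a stand-in for the paper's unspecified features)."""
    total = mhi.sum()
    if total == 0:
        return np.zeros(4)
    ys, xs = np.indices(mhi.shape)
    cx = (xs * mhi).sum() / total
    cy = (ys * mhi).sum() / total
    sx = np.sqrt(((xs - cx) ** 2 * mhi).sum() / total)
    sy = np.sqrt(((ys - cy) ** 2 * mhi).sum() / total)
    h, w = mhi.shape
    return np.array([cx / w, cy / h, sx / w, sy / h])
```

Running `update_mhi` over a video and collecting `descriptor(mhi)` per frame yields exactly the kind of time series of low-dimensional vectors the abstract describes, which a Gaussian Naïve Bayes classifier can then label per action; the 4-dimensional descriptor keeps both the template update and the classifier cheap enough for real-time use.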