Human activity recognition using optical flow based feature set
S. S. Kumar, M. John
2016 IEEE International Carnahan Conference on Security Technology (ICCST), pp. 1-5, October 2016
DOI: 10.1109/CCST.2016.7815694
Citations: 33
Abstract
This paper addresses an optical flow based approach for recognizing human actions and human-human interactions in video sequences. We propose a local descriptor built from optical flow vectors sampled along the edges of the action performer(s). Using the proposed descriptor with a multi-class SVM classifier, recognition rates of 95.69% and 94.62% are achieved on the Weizmann and KTH action datasets, respectively, and rates of 92.7% and 90.21% on UT-Interaction Set_1 and Set_2. The results demonstrate that the method is simple and efficient.
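The core idea, a descriptor built from optical flow vectors at the performer's edge pixels, can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact descriptor: it assumes a precomputed dense flow field and a binary edge mask (in practice these might come from, e.g., Farnebäck optical flow and a Canny edge detector), and quantizes the flow vectors at edge locations into a magnitude-weighted orientation histogram. The function name and histogram design are assumptions for illustration.

```python
import numpy as np

def edge_flow_descriptor(flow, edge_mask, n_bins=8):
    """Build an orientation histogram from optical-flow vectors at edge pixels.

    flow      : (H, W, 2) array of per-pixel flow vectors (vx, vy)
    edge_mask : (H, W) boolean array marking the performer's edge pixels
    n_bins    : number of orientation bins covering [0, 2*pi)

    Returns an L1-normalized histogram of flow orientations, weighted by
    flow magnitude, usable as a fixed-length feature vector. (Hypothetical
    sketch of the edge-flow descriptor idea, not the paper's exact method.)
    """
    vx = flow[..., 0][edge_mask]
    vy = flow[..., 1][edge_mask]
    # Orientation in [0, 2*pi) and magnitude of each edge-pixel flow vector.
    angles = np.arctan2(vy, vx) % (2.0 * np.pi)
    mags = np.hypot(vx, vy)
    # Quantize orientations into n_bins equal sectors, weight by magnitude.
    bins = np.floor(angles / (2.0 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, weights=mags, minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Descriptors like this, computed per frame or per clip, could then be fed to a multi-class SVM (e.g. scikit-learn's `SVC`) as in the paper's classification stage; the histogram normalization makes the feature invariant to the number of edge pixels.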