Human action recognition using RGB data
Amel Ben Mahjoub, Mohamed Atri
2016 11th International Design & Test Symposium (IDT), December 2016
DOI: 10.1109/IDT.2016.7843019
Citations: 22
Human action recognition is an important computer vision research area with numerous applications. This paper presents our method for recognizing human activities. We use Spatio-Temporal Interest Points (STIPs) to detect significant changes in the video. Then, we extract appearance and motion features at these interest points using the Histogram of Oriented Gradients (HOG) and Histogram of Optical Flow (HOF) descriptors. Finally, we train a Support Vector Machine (SVM) on a Bag-of-Words (BoW) representation of the space-time interest point descriptors to assign a label to each video sequence. We evaluate our approach on the challenging UTD-MHAD dataset, where it achieves a good action recognition rate and outperforms other methods that use the same sequence data from the public UTD-MHAD database.
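The classification stage described above (visual vocabulary, BoW histograms, SVM) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes the per-video local descriptors (e.g. HOG/HOF computed around STIPs) are already available, and stands in synthetic Gaussian descriptors for them; the 162-dimensional size mirrors the common HOG+HOF descriptor length in Laptev's STIP tooling, and the class shift, vocabulary size, and video counts are arbitrary.

```python
# Sketch of the BoW + SVM pipeline, assuming local HOG/HOF descriptors
# per video are already extracted (synthetic stand-ins used here).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_descriptors(n_points, dim=162, shift=0.0):
    # Placeholder for real HOG+HOF descriptors around detected STIPs.
    return rng.normal(loc=shift, scale=1.0, size=(n_points, dim))

# Toy dataset: two "action classes" with slightly different statistics.
videos, labels = [], []
for label, shift in [(0, 0.0), (1, 1.5)]:
    for _ in range(10):
        videos.append(fake_descriptors(n_points=40, shift=shift))
        labels.append(label)

# 1) Learn a visual vocabulary by clustering all training descriptors.
vocab_size = 8
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(videos))

# 2) Encode each video as a normalized histogram of visual words.
def bow_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(v) for v in videos])
y = np.array(labels)

# 3) Train an SVM on the BoW histograms to label each video sequence.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real system the vocabulary would be learned on training videos only, and the SVM evaluated on held-out sequences; a histogram-friendly kernel (e.g. chi-squared) is also a common choice in BoW action recognition.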