{"title":"An Update-Describe Approach for Human Action Recognition in Surveillance Video","authors":"A. Wiliem, V. Madasu, W. Boles, P. Yarlagadda","doi":"10.1109/DICTA.2010.55","DOIUrl":null,"url":null,"abstract":"In this paper, an approach for human action recognition is presented based on adaptive bag-of-words features. Bag-of-words techniques employ a codebook to describe a human action. For successful recognition, most action recognition systems currently require the optimal codebook size to be determined, as well as all instances of human actions to be available for computing the features. These requirements are difficult to satisfy in real life situations. An update - describe method for addressing these problems is proposed. Initially, interest point patches are extracted from action clips. Then, in the update step these patches are clustered using the Clustream algorithm. Each cluster centre corresponds to a visual word. A histogram of these visual words representing an action is constructed in the describe step. A chi-squared distance-based classifier is utilised for recognising actions. The proposed approach is implemented on benchmark KTH and Weizmann datasets.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 International Conference on Digital Image Computing: Techniques and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2010.55","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In this paper, an approach for human action recognition is presented based on adaptive bag-of-words features. Bag-of-words techniques employ a codebook to describe a human action. For successful recognition, most current action recognition systems require the optimal codebook size to be determined in advance, as well as all instances of human actions to be available for computing the features. These requirements are difficult to satisfy in real-life situations. An update-describe method for addressing these problems is proposed. Initially, interest point patches are extracted from action clips. Then, in the update step, these patches are clustered using the CluStream algorithm. Each cluster centre corresponds to a visual word. In the describe step, a histogram of these visual words representing an action is constructed. A chi-squared distance-based classifier is utilised for recognising actions. The proposed approach is evaluated on the benchmark KTH and Weizmann datasets.
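The abstract only outlines the pipeline, so the following is a minimal sketch, not the authors' implementation: it assumes patch descriptors are assigned to the CluStream cluster centres by nearest Euclidean distance in the describe step, and that recognition uses a nearest-neighbour rule under the chi-squared distance. The function names (`describe`, `chi2_distance`, `classify`), array shapes, and normalisation choices are illustrative assumptions.

```python
import numpy as np

def describe(patch_descriptors, codebook):
    """Describe step: map each interest-point descriptor to its nearest
    visual word (a CluStream cluster centre) and build a normalised histogram.
    patch_descriptors: (num_patches, dim), codebook: (num_words, dim)."""
    # Squared Euclidean distance from every descriptor to every codeword.
    dists = ((patch_descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = dists.argmin(axis=1)  # nearest visual word for each patch
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalise to unit mass

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two visual-word histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify(query_hist, train_hists, train_labels):
    """Assumed nearest-neighbour classification under the chi-squared distance."""
    dists = [chi2_distance(query_hist, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]
```

In this sketch the codebook is simply the current set of cluster centres maintained by the update step, so new action clips can be described without re-clustering all previously seen patches, which is the motivation the abstract gives for using a stream clustering algorithm.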