Motion Flow Feature Algorithm for Action Recognition in Videos
Run Ye, B. Yan, Shi-Dong Hou, Xiaokang Jing
2020 13th International Symposium on Computational Intelligence and Design (ISCID), December 2020
DOI: 10.1109/ISCID51228.2020.00049 (https://doi.org/10.1109/ISCID51228.2020.00049)
Citations: 0
Abstract
Motion representation has become a critical factor as more and more trimmed-video action recognition tasks rely on machine learning. In this paper, we propose a new motion representation inspired by optical flow algorithms, which have proved effective and efficient for video action recognition. Our motion flow is a modality distinct from RGB, RGB difference, and the widely used optical flow; although the methodology is derived from optical flow, it is both faster and more accurate than optical flow algorithms. Furthermore, we adopt the densely connected convolutional network (DenseNet) framework and use motion flow as its input. In our experimental evaluation, plugging the proposed motion representation into the DenseNet framework yields accuracies of 96% on UCF-101 and 74.2% on HMDB-51, showing that the proposed method is both accurate and roughly 15 times faster than optical flow.
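The abstract does not spell out how the motion-flow representation is computed, only that it is derived from optical flow but cheaper. A common fast stand-in with the same role is a stack of consecutive frame differences fed to a 2-D CNN such as DenseNet. The sketch below illustrates that general idea only; the function name, the normalization, and the number of sampled pairs are all hypothetical and are not the authors' algorithm.

```python
import numpy as np

def motion_stack(frames, num_pairs=5):
    """Build a cheap motion representation from consecutive frame
    differences, a common fast substitute for dense optical flow.

    frames: (T, H, W) grayscale video, float32 in [0, 1]
    returns: (num_pairs, H, W) stack of absolute frame differences,
             ready to be fed channel-wise to a 2-D CNN.
    """
    # Sample num_pairs frame indices evenly across the clip.
    t = np.linspace(0, len(frames) - 2, num_pairs).astype(int)
    diffs = np.abs(frames[t + 1] - frames[t])
    # Normalize each difference map so static background stays near 0
    # and moving regions dominate, regardless of clip brightness.
    peak = diffs.max(axis=(1, 2), keepdims=True)
    return diffs / np.maximum(peak, 1e-6)

# Toy clip: a bright square translating one pixel per frame.
clip = np.zeros((8, 32, 32), dtype=np.float32)
for i in range(8):
    clip[i, 10:20, 5 + i:15 + i] = 1.0

stack = motion_stack(clip, num_pairs=5)
print(stack.shape)  # (5, 32, 32)
```

In a pipeline like the one the abstract describes, such a stack would replace the optical-flow input channels of the DenseNet stream, trading some motion fidelity for speed.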