{"title":"Multi-Temporal-Resolution Technique for Action Recognition using C3D: Experimental Study","authors":"Bassel S. Chawky, M. Marey, Howida A. Shedeed","doi":"10.1109/ICCES.2018.8639245","DOIUrl":null,"url":null,"abstract":"In any given video containing an action, the motion conveys information complementary to the individual frames. This motion varies in speed for similar actions. Therefore, it is a promising approach to train a separate deep-learning model for different versions of action speeds. In this paper, two novel ideas are explored: single-temporal-resolution single-model (STR-SM) and multi-temporal-resolution multi-model (MTR-MM). The STR-SM model is trained on one specific temporal resolution of the action dataset. This allows the model to accept a longer temporal frame range as input and therefore, a faster action classification. On the other hand, the MTR-MM is a set of STR-SM models, each trained on a different temporal resolution with a late fusion using majority voting achieving more accurate action recognition. Both models have improvements over the traditional training approach, 3.63% and 6% video-wise accuracy respectively.","PeriodicalId":113848,"journal":{"name":"2018 13th International Conference on Computer Engineering and Systems (ICCES)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 13th International Conference on Computer Engineering and Systems (ICCES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCES.2018.8639245","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In any given video containing an action, the motion conveys information complementary to that of the individual frames. The speed of this motion varies even among instances of the same action, so training a separate deep-learning model for each action-speed variant is a promising approach. In this paper, two novel ideas are explored: the single-temporal-resolution single-model (STR-SM) and the multi-temporal-resolution multi-model (MTR-MM). The STR-SM model is trained on one specific temporal resolution of the action dataset, which allows it to cover a longer temporal range of frames per input and therefore classify actions faster. The MTR-MM, in contrast, is a set of STR-SM models, each trained on a different temporal resolution, whose predictions are combined by late fusion with majority voting to achieve more accurate action recognition. Both approaches improve on the traditional training approach, by 3.63% and 6% in video-wise accuracy, respectively.
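As a rough illustration of the MTR-MM late-fusion step described above, the sketch below resamples a clip at several temporal strides, runs a per-resolution classifier on each version, and combines the predictions by majority voting. This is a minimal sketch, not the authors' implementation: the helper names (`subsample_frames`, `mtr_mm_predict`), the dummy stand-in models, and the 101-class output size are hypothetical assumptions.

```python
import numpy as np


def subsample_frames(frames, stride):
    """Build a coarser temporal resolution by keeping every `stride`-th frame."""
    return frames[::stride]


def mtr_mm_predict(frames, models, strides):
    """Late fusion by majority voting over per-resolution predictions.

    `models[i]` is assumed to be a callable (e.g. a trained C3D-style network)
    that maps a clip sampled at `strides[i]` to a vector of class scores.
    """
    votes = []
    for model, stride in zip(models, strides):
        clip = subsample_frames(frames, stride)
        scores = model(clip)                 # per-class scores at this resolution
        votes.append(int(np.argmax(scores)))
    # Majority vote over the per-resolution class predictions.
    return int(np.bincount(votes).argmax())


if __name__ == "__main__":
    # Hypothetical usage with random stand-ins for trained STR-SM networks.
    rng = np.random.default_rng(0)
    frames = rng.random((64, 112, 112, 3))            # 64-frame clip, 112x112 RGB
    dummy_models = [lambda clip: rng.random(101) for _ in range(3)]  # 101 classes (assumed)
    strides = [1, 2, 4]                               # three temporal resolutions
    print(mtr_mm_predict(frames, dummy_models, strides))
```

In this sketch the stride controls the temporal resolution each model sees; each model votes for one class, and the fused prediction is simply the most frequent vote across resolutions.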