Title: Transfer Learning Based Method for Human Activity Recognition
Authors: S. Zebhi, S. Almodarresi, V. Abootalebi
Published in: 2021 29th Iranian Conference on Electrical Engineering (ICEE), 2021-05-18
DOI: 10.1109/ICEE52715.2021.9544129 (https://doi.org/10.1109/ICEE52715.2021.9544129)
Citation count: 1
Abstract
A gait history image (GHI) is a spatial template that accumulates areas of movement into a single template. A new descriptor, the time-sliced averaged gradient boundary magnitude (TAGBM), is constructed to capture the temporal variations of motion. In the proposed approach, every video is divided into L and M groups of successive frames, and a GHI and a TAGBM are computed for each group, yielding spatial and temporal templates. A transfer-learning method is then used to classify them. The proposed approach achieves recognition accuracies of 96.5% and 92.7% on the KTH and UCF Sports action datasets, respectively.
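The abstract's pipeline (split a video into groups of successive frames, build one motion template per group) can be sketched as follows. This is only an illustrative sketch: the paper's exact GHI and TAGBM formulas are not given in the abstract, so simple inter-frame differencing stands in for the motion cue, and the function names and `threshold` parameter are assumptions, not the authors' implementation.

```python
import numpy as np

def gait_history_image(frames, threshold=30):
    """Accumulate areas of inter-frame motion into a single spatial
    template (a GHI-like sketch using frame differencing as the motion cue)."""
    ghi = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Mark pixels whose intensity changed noticeably between frames.
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > threshold
        ghi += motion
    # Normalize by the number of frame pairs so values lie in [0, 1].
    return ghi / max(len(frames) - 1, 1)

def split_into_groups(frames, n_groups):
    """Divide a video into n_groups of successive frames, so one
    template can be computed per group (the paper's L or M groups)."""
    return np.array_split(np.asarray(frames), n_groups)
```

In this sketch, each group returned by `split_into_groups` would be passed to `gait_history_image` (and, in the paper, to the analogous TAGBM computation) before the resulting templates are fed to a transfer-learned classifier.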