Dynamic Hand Gesture Recognition Using Temporal-Stream Convolutional Neural Networks
Fladio Armandika, E. C. Djamal, Fikri Nugraha, Fatan Kasyidi
2020 7th International Conference on Electrical Engineering, Computer Sciences and Informatics (EECSI), pp. 132-136, October 2020
DOI: 10.23919/EECSI50503.2020.9251902
Citations: 2
Abstract
Movement recognition is an active problem in machine learning. Gesture recognition involves video processing, which raises difficulties on several fronts. One is cleanly separating the subject from the background, which becomes especially consequential when the scene differs substantially from the training data. Another is the number of frames that must be processed at a time to capture motion. Previous studies have experimented with deep Convolutional Neural Network architectures that detect actions in sequential models by balancing appearance within frames against motion between frames. A key challenge in identifying objects in temporal video is the number of parameters required even for simple video classification, so an estimate of each object's motion in every frame is needed. This paper proposes classifying hand movement patterns with a Single-Stream Temporal Convolutional Neural Network. The model was robust to inputs far outside the training data, reaching an accuracy of up to 81.7%. It used a 50-layer ResNet architecture trained on recorded video.
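The abstract does not give implementation details, but a temporal-stream model of this kind consumes a fixed-length stack of frames per clip. A minimal sketch of that input preparation step is shown below; the `sample_frames` helper and the 16-frame budget are illustrative assumptions, not taken from the paper.

```python
def sample_frames(frames, num_frames=16):
    """Uniformly sample a fixed number of frames from a variable-length clip.

    frames: a sequence of per-frame data (e.g. decoded images).
    Returns a list of exactly num_frames elements, evenly spaced in time,
    which would then be stacked into the network's temporal input tensor.
    """
    total = len(frames)
    if total <= num_frames:
        return list(frames)  # short clip: keep every frame
    step = (total - 1) / (num_frames - 1)
    indices = [round(i * step) for i in range(num_frames)]
    return [frames[j] for j in indices]

# Hypothetical 120-frame clip, represented here by frame labels.
clip = [f"frame_{i}" for i in range(120)]
stacked = sample_frames(clip)
print(len(stacked))        # 16
print(stacked[0], stacked[-1])  # frame_0 frame_119
```

Fixing the temporal length this way keeps the parameter count of the downstream CNN constant regardless of clip duration, which is one common answer to the parameter-count challenge the abstract mentions.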