Md. Rashedul Islam, Umme Kulsum Mitu, R. Bhuiyan, Jungpil Shin
{"title":"Hand Gesture Feature Extraction Using Deep Convolutional Neural Network for Recognizing American Sign Language","authors":"Md. Rashedul Islam, Umme Kulsum Mitu, R. Bhuiyan, Jungpil Shin","doi":"10.1109/ICFSP.2018.8552044","DOIUrl":null,"url":null,"abstract":"In this era, Human-Computer Interaction (HCI) is a fascinating field about the interaction between humans and computers. Interacting with computers, human Hand Gesture Recognition (HGR) is the most significant way and the major part of HCI. Extracting features and detecting hand gesture from inputted color videos is more challenging because of the huge variation in the hands. For resolving this issue, this paper introduces an effective HGR system for low-cost color video using webcam. In this proposed model, Deep Convolutional Neural Network (DCNN) is used for extracting efficient hand features to recognize the American Sign Language (ASL) using hand gestures. Finally, the Multi-class Support Vector Machine (MCSVM) is used for identifying the hand sign, where CNN extracted features are used to train up the machine. Distinct person hand gesture is used for validation in this paper. 
The proposed model shows satisfactory performance in terms of classification accuracy, i.e., 94.57%","PeriodicalId":355222,"journal":{"name":"2018 4th International Conference on Frontiers of Signal Processing (ICFSP)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"37","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 4th International Conference on Frontiers of Signal Processing (ICFSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICFSP.2018.8552044","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 37
Abstract
Human-Computer Interaction (HCI) is a fascinating field concerned with the interaction between humans and computers. Hand Gesture Recognition (HGR) is one of the most significant ways of interacting with computers and a major part of HCI. Extracting features and detecting hand gestures from color video input is challenging because of the large variation among hands. To address this issue, this paper introduces an effective HGR system for low-cost color video captured with a webcam. In the proposed model, a Deep Convolutional Neural Network (DCNN) is used to extract efficient hand features for recognizing American Sign Language (ASL) from hand gestures. Finally, a Multi-class Support Vector Machine (MCSVM) identifies the hand sign, where the CNN-extracted features are used to train the classifier. Hand gestures from distinct persons are used for validation. The proposed model shows satisfactory performance, achieving a classification accuracy of 94.57%.
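The two-stage pipeline the abstract describes (convolutional feature extraction followed by a multi-class SVM classifier) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the fixed 3×3 kernel, synthetic "gesture" images, image sizes, and pooling size are all hypothetical stand-ins, and the real DCNN learns its filters from data.

```python
import numpy as np
from sklearn.svm import SVC

def conv_features(img, kernel, pool=4):
    """Hypothetical stand-in for the DCNN feature extractor:
    one fixed convolution + ReLU + max-pooling stage, flattened
    into a feature vector for the downstream SVM."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    out = np.maximum(out, 0)  # ReLU activation
    ph, pw = out.shape[0] // pool, out.shape[1] // pool
    # Non-overlapping max pooling over pool x pool windows
    pooled = out[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
    return pooled.ravel()

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))  # fixed random filter (illustrative only)

def make_image(label):
    """Synthetic 32x32 'gesture' image: class k is a bright
    vertical patch at a class-specific column offset."""
    img = rng.standard_normal((32, 32)) * 0.1
    img[8:24, label * 8:label * 8 + 8] += 1.0
    return img

labels = np.repeat(np.arange(4), 20)  # 4 gesture classes, 20 samples each
X = np.array([conv_features(make_image(y), kernel) for y in labels])

# Stage 2: multi-class SVM (one-vs-rest) trained on the extracted features,
# mirroring the MCSVM classification stage described in the abstract.
clf = SVC(kernel="rbf", decision_function_shape="ovr")
clf.fit(X, labels)
print(clf.score(X, labels))
```

The key design point this illustrates is the division of labor: the convolutional stage turns raw pixels into a compact, variation-tolerant feature vector, and the SVM only has to separate classes in that feature space rather than in pixel space.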