Automated Hand Gesture Recognition Using Machine Learning
Mayeesha Mahzabin, M. Hasan, Sabrina Nahar, Mosabber Uddin Ahmed
2021 24th International Conference on Computer and Information Technology (ICCIT), 18 December 2021
DOI: 10.1109/ICCIT54785.2021.9689817 (https://doi.org/10.1109/ICCIT54785.2021.9689817)
Abstract
Hand gestures and sign language are a vital means of communication for deaf-mute people, but most hearing people do not understand sign language; automated recognition systems can help overcome this barrier. In this research, we aimed for accurate recognition of hand gestures using four machine learning models on two datasets: American Sign Language (ASL) and a general gesture set, covering both static and dynamic gestures for English. All classifiers showed good accuracy on both datasets, and results improved further after normalization. The first model, an Artificial Neural Network (ANN) designed by varying its input, hidden, and output layers, gave an accuracy of 99.40%. The second model, K-Nearest Neighbors (KNN), achieved 99.14%, and the third, a Decision Tree (DT), achieved 94.52%. Finally, we combined these models with a majority-vote ensemble classifier, which improved overall predictive performance, proved to be a much more generalized model, and reached an accuracy of 99.45%. For dynamic gestures, we obtained 100% accuracy on three gestures.
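The abstract gives no implementation details, so the following is a minimal sketch of the pipeline it describes (normalization, then ANN, KNN, and DT base classifiers combined by a majority-vote ensemble), written with scikit-learn. The placeholder data, feature layout, and all hyperparameters are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch (not the authors' code): normalization + ANN/KNN/DT
# base classifiers combined by hard majority voting, as the abstract
# describes. Dataset and hyperparameters below are placeholders.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: rows are flattened hand-gesture features and labels
# are gesture classes (e.g., ASL letters). Swap in the real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 42))      # e.g., 21 (x, y) hand keypoints
y = rng.integers(0, 26, size=1000)   # e.g., 26 ASL letter classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Normalization lives inside each pipeline so the test split is scaled
# only with statistics learned from the training split.
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
dt = DecisionTreeClassifier(random_state=0)

# Hard voting: each base classifier casts one vote per sample and the
# majority class wins, mirroring the abstract's ensemble vote classifier.
ensemble = VotingClassifier(
    estimators=[("ann", ann), ("knn", knn), ("dt", dt)], voting="hard"
)
ensemble.fit(X_train, y_train)
print(f"ensemble accuracy: {ensemble.score(X_test, y_test):.4f}")
```

On real, separable gesture features this kind of hard-voting ensemble tends to smooth over the individual models' mistakes, which is consistent with the paper's report that the ensemble (99.45%) edged out its best single model (99.40%).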