Smart Glove for Bi-lingual Sign Language Recognition using Machine Learning
Deemah Alosail, Hussa Aldolah, Layla Alabdulwahab, A. Bashar, Majid Khan
DOI: 10.1109/IDCIoT56793.2023.10053470
Journal: 物联网技术 (Internet of Things Technologies), vol. 29, pp. 409-415
Published: 2023-01-05
Citations: 0
Abstract
The deaf community has the right to a comfortable and respectable life, including the ability to communicate with hearing people without hurdles or impediments. To address this objective, several research efforts have developed smart gloves that convert sign language into speech or text. This work designs, implements, and tests a non-vision-based smart glove that improves recognition accuracy while reducing implementation complexity. Specifically, five flex sensors and an accelerometer are used to recognize signs and convert them into speech and text. Four prominent Machine Learning (ML) classifiers (LR, SVM, MLP, and RF) are evaluated for recognizing both American Sign Language (ASL) and Arabic Sign Language (ArSL). The Random Forest (RF) classifier achieves the best classification accuracy: 99.7% for ASL and 99.8% for ArSL. A feature-importance analysis shows that the accelerometer features dominate the flex-sensor features in recognizing signs. To advance this work further, the implementation and performance of non-vision-based and vision-based sign language recognition could be compared.
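The pipeline the abstract describes, training an RF classifier on five flex-sensor readings plus accelerometer axes and then ranking features by importance, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, the feature layout (five flex values plus three accelerometer axes), and all names are assumptions.

```python
# Hypothetical sketch of the paper's classification pipeline: train a
# Random Forest on glove sensor features and inspect feature importances.
# The dataset here is synthetic; the paper's real data, gesture set, and
# exact feature layout are assumptions, not taken from the source.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_signs, per_sign = 10, 50  # 10 example gestures, 50 samples each

# Synthetic sensor readings: flex1..flex5, acc_x, acc_y, acc_z
X = np.vstack([
    rng.normal(loc=sign, scale=0.3, size=(per_sign, 8))
    for sign in range(n_signs)
])
y = np.repeat(np.arange(n_signs), per_sign)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))

# Rank features by importance, as in the paper's analysis of
# accelerometer vs. flex-sensor contributions.
feature_names = ["flex1", "flex2", "flex3", "flex4", "flex5",
                 "acc_x", "acc_y", "acc_z"]
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
print(f"test accuracy: {acc:.3f}")
```

On real glove data the relative importances would reveal whether hand-orientation (accelerometer) or finger-bend (flex) features drive recognition, which is the comparison the abstract reports.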