Real-time Recognition of Indian Sign Language using OpenCV and Deep Learning
T. Madhumitha, Gudapati Sai Geethika, V. Radhesyam
2023 4th International Conference for Emerging Technology (INCET), published 2023-05-26
DOI: 10.1109/INCET57972.2023.10170080 (https://doi.org/10.1109/INCET57972.2023.10170080)
Citations: 0
Abstract
Sign language uses hand gestures to facilitate communication for individuals with speech or hearing impairments. Real-time sign language recognition provides a medium of communication between the general public and those who have difficulty hearing or speaking. Various models have been developed to address this problem, but traditional models are either expensive or limited to a fixed set of gestures. To address this issue, a model has been developed that recognizes sign language gestures in real time. This robust model provides an efficient way to recognize Indian Sign Language (ISL) signs dynamically. A custom dataset was created containing ten phrases that convey comprehensive meaning. The captured data is augmented so that gestures can be identified under different variations. A convolutional neural network is employed to perform multi-class classification on the image data. The proposed model recognizes a person's gestures and produces text output. The results and observations demonstrate that the model identifies a person's signs accurately and efficiently in real time. The customized model offers the advantage that new gestures can be added as required, and the suggested improvements outline several methods that can be leveraged to upgrade the model.
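The abstract mentions that the captured gesture data is augmented so the model can identify gestures under different variations. The paper does not specify the exact augmentation pipeline, so the following is only a minimal sketch of one plausible approach: generating a mirrored copy and brightness-shifted copies of each captured frame with NumPy. The function name and parameters are hypothetical.

```python
import numpy as np

def augment_gesture_image(img, brightness_delta=30):
    """Generate simple variations of one captured gesture frame.

    img: an H x W x 3 uint8 array (e.g. a single webcam frame).
    Returns a list of frames: the original, a horizontal mirror,
    and brighter/darker copies. This is a hypothetical helper;
    the paper's actual augmentation steps are not described.
    """
    variants = [img]
    # Mirror left/right: a sign made with the other hand still matches.
    variants.append(img[:, ::-1, :])
    # Shift brightness up and down, clipping to the valid uint8 range.
    lighter = np.clip(img.astype(np.int16) + brightness_delta, 0, 255).astype(np.uint8)
    darker = np.clip(img.astype(np.int16) - brightness_delta, 0, 255).astype(np.uint8)
    variants += [lighter, darker]
    return variants

# Dummy frame: left half bright (200), right half dark (0).
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[:, :32, :] = 200
augmented = augment_gesture_image(frame)
```

Each original frame thus yields four training samples; in practice, libraries such as Keras's `ImageDataGenerator` apply similar (and richer) transformations on the fly during training.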