Real-time Recognition of Indian Sign Language using OpenCV and Deep Learning

T. Madhumitha, Gudapati Sai Geethika, V. Radhesyam
{"title":"基于OpenCV和深度学习的印度手语实时识别","authors":"T. Madhumitha, Gudapati Sai Geethika, V. Radhesyam","doi":"10.1109/INCET57972.2023.10170080","DOIUrl":null,"url":null,"abstract":"Sign language is a mechanism that uses hand gestures to facilitate communication between individuals with speaking or hearing impairments. Real-time sign language recognition provides a medium of communication between the general public and those who have difficulty with hearing or speaking. Different kinds of models are developed to provide a feasible solution for this problem. But the traditional models are either expensive or not customizable with limited gestures. In order to address this issue, a model has been developed that can recognize the sign language gestures immediately in real-time. This robust model provides an efficient way to recognize Indian Sign language (ISL) signs dynamically. The dataset is created in a customized manner to include ten phrases that convey comprehensive meaning. The captured data is augmented to identify gestures with different variations. A convolutional neural network has been employed to build the model and perform the multi-class classification on image data. The proposed model recognizes person’s gesture and provides a text output. The results and observations demonstrate that the model identifies a person’s signs accurately and efficiently in real-time. The customized model provides various advantages as new gestures can be added according to the requirement. The improvements suggest various methods that can be leveraged to upgrade the model.","PeriodicalId":403008,"journal":{"name":"2023 4th International Conference for Emerging Technology (INCET)","volume":"66 1-2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real-time Recognition of Indian Sign Language using OpenCV and Deep Learning\",\"authors\":\"T. Madhumitha, Gudapati Sai Geethika, V. Radhesyam\",\"doi\":\"10.1109/INCET57972.2023.10170080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sign language is a mechanism that uses hand gestures to facilitate communication between individuals with speaking or hearing impairments. Real-time sign language recognition provides a medium of communication between the general public and those who have difficulty with hearing or speaking. Different kinds of models are developed to provide a feasible solution for this problem. But the traditional models are either expensive or not customizable with limited gestures. In order to address this issue, a model has been developed that can recognize the sign language gestures immediately in real-time. This robust model provides an efficient way to recognize Indian Sign language (ISL) signs dynamically. The dataset is created in a customized manner to include ten phrases that convey comprehensive meaning. The captured data is augmented to identify gestures with different variations. A convolutional neural network has been employed to build the model and perform the multi-class classification on image data. The proposed model recognizes person’s gesture and provides a text output. The results and observations demonstrate that the model identifies a person’s signs accurately and efficiently in real-time. The customized model provides various advantages as new gestures can be added according to the requirement. 
The improvements suggest various methods that can be leveraged to upgrade the model.\",\"PeriodicalId\":403008,\"journal\":{\"name\":\"2023 4th International Conference for Emerging Technology (INCET)\",\"volume\":\"66 1-2\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 4th International Conference for Emerging Technology (INCET)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INCET57972.2023.10170080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 4th International Conference for Emerging Technology (INCET)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INCET57972.2023.10170080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Sign language is a mechanism that uses hand gestures to facilitate communication between individuals with speaking or hearing impairments. Real-time sign language recognition provides a medium of communication between the general public and those who have difficulty hearing or speaking. Different kinds of models have been developed to provide a feasible solution to this problem, but traditional models are either expensive or limited to a fixed set of gestures that cannot be customized. To address this issue, a model has been developed that recognizes sign language gestures in real time. This robust model provides an efficient way to recognize Indian Sign Language (ISL) signs dynamically. The dataset is created in a customized manner to include ten phrases that convey comprehensive meaning. The captured data is augmented so that gestures can be identified under different variations. A convolutional neural network is employed to build the model and perform multi-class classification on the image data. The proposed model recognizes a person's gesture and produces a text output. The results and observations demonstrate that the model identifies a person's signs accurately and efficiently in real time. The customized model offers the advantage that new gestures can be added as required, and the suggested improvements outline several methods that can be leveraged to upgrade the model.
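
The abstract describes a pipeline of webcam capture with OpenCV, data augmentation, and a CNN performing ten-way classification whose predicted phrase is shown as text in real time. The paper does not give the exact architecture, input resolution, augmentation settings, or phrase labels, so the sketch below is only an illustration of that kind of pipeline under assumed values (a 64x64 input, a small convolutional stack, light augmentation layers, and placeholder phrase names).

```python
# Minimal sketch of a 10-class gesture pipeline: a small Keras CNN with
# training-time augmentation, plus an OpenCV loop that overlays the
# predicted phrase on the live video feed. Architecture, input size,
# augmentation settings, and phrase labels are assumptions for illustration.
import cv2
import numpy as np
from tensorflow.keras import layers, models

IMG_SIZE = 64          # assumed input resolution
NUM_CLASSES = 10       # ten phrases, as stated in the abstract
PHRASES = [f"phrase_{i}" for i in range(NUM_CLASSES)]  # placeholder labels


def build_model():
    """Small CNN for multi-class classification of gesture images."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        # Augmentation layers are active only during training.
        layers.RandomRotation(0.05),
        layers.RandomZoom(0.1),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def run_realtime(model):
    """Capture webcam frames, classify each one, and overlay the text output."""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Preprocess the frame to match the training input.
        roi = cv2.resize(frame, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
        probs = model.predict(roi[np.newaxis, ...], verbose=0)[0]
        label = PHRASES[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("ISL recognition (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    model = build_model()
    # model.fit(...) would be called here on the customized gesture dataset
    # before starting the real-time loop.
    run_realtime(model)
```

In practice the network would be trained on the collected and augmented gesture dataset before the real-time loop is started; the augmentation layers apply only during training, so inference on live frames is unaffected.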