Gesture based Real-Time Sign Language Recognition System

Tiya Ann Siby, Sonam Pal, Jessica Arlina, S. Nagaraju
DOI: 10.1109/CSI54720.2022.9924024
Published in: 2022 International Conference on Connected Systems & Intelligence (CSI), 2022-08-31
Citations: 2

Abstract

Real-Time Sign Language Recognition (RTSLG) can help people express clearer thoughts, speak in shorter sentences, and use declarative language more expressively. Hand gestures provide a wealth of information that persons with disabilities can use to communicate in a fundamental way and to complement communication for others. Since hand gesture information is based on movement sequences, accurately detecting hand gestures in real time is difficult. Hearing-impaired persons have difficulty interacting with others, resulting in a communication gap. The only way for them to communicate their ideas and feelings is to use hand signals, which many people do not understand. As a result, hand gesture detection systems have gained prominence in recent years. This paper proposes a deep learning model, built with Python, TensorFlow, OpenCV, and Histogram Equalization, that can be accessed from a web browser. The proposed RTSLG system uses image detection, computer vision, and a Convolutional Neural Network (CNN) to recognise the characteristics of the hand in video captured by a web camera. To enhance image detail, an image processing technique called Histogram Equalization is applied. The accuracy obtained by the proposed system is 87.8%. Once a gesture is recognized and its text output is displayed, the proposed RTSLG system uses the gTTS (Google Text-to-Speech) library to convert the displayed text to audio, assisting communication for speech- and hearing-impaired persons.
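The Histogram Equalization preprocessing step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the technique, not the authors' implementation; the paper uses OpenCV, where `cv2.equalizeHist` performs the same operation on an 8-bit grayscale frame.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Equalize the histogram of an 8-bit grayscale image.

    Spreads the cumulative distribution of gray levels over the
    full 0-255 range, which enhances contrast and image detail.
    """
    hist = np.bincount(img.ravel(), minlength=256)  # per-level pixel counts
    cdf = hist.cumsum()                             # cumulative distribution
    cdf_min = cdf[cdf > 0].min()                    # lowest occupied level
    # Lookup table mapping old gray levels to equalized ones.
    lut = np.clip(
        np.round((cdf - cdf_min) / max(img.size - cdf_min, 1) * 255),
        0, 255,
    )
    return lut.astype(np.uint8)[img]
```

In the pipeline the abstract outlines, a step like this would run on each grayscale webcam frame before the CNN classifies it, and the recognized text would then be passed to gTTS for audio output.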