An AI based Solution for Predicting the Text Pattern from Sign Language

Bhargav, DN Abhishek, Deekshitha, Skanda Talanki, Sumalatha Aradhya, Thejaswini
DOI: 10.1145/3474124.3474210
Published in: 2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)
Publication date: 2021-08-05
Citations: 1

Abstract

A large social group could benefit from technology-assisted sign language detection, yet it remains an overlooked problem. Communicating with others in society is a primary aim of learning sign language, but communication between members of this group and the wider community is rare due to limited access to suitable technology, and hearing-impaired people are left behind. Because most hearing people cannot sign, they rely on texting to communicate with hearing-impaired people, which is less than ideal. Deaf people should be able to communicate naturally regardless of whether their conversation partner knows sign language. Sign language analysis is based on the patterns of movement generated by the hand and fingers. The aim of this paper is to recognize sign language gestures using convolutional neural networks; the proposed solution generates a text pattern from each sign gesture. An RGB camera was used to capture static sign language gestures, and the captured images were preprocessed to produce cleaned inputs. The dataset of sign language gestures was trained and tested on a convolutional neural network with multiple layers. The trained model recognizes the hand gestures, and speech is then generated from the resulting text. In addition to outlining the challenges posed by such a problem, the paper also outlines future opportunities.
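The pipeline described above (capture an RGB frame, preprocess it into a cleaned input, then feed it through convolutional layers) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the 64x64 input size, channel-mean grayscale conversion, nearest-neighbour resize, and edge-detecting kernel are all illustrative assumptions, standing in for the unspecified preprocessing and the core operation of one CNN layer.

```python
import numpy as np

def preprocess(frame, size=64):
    """Grayscale, downsample, and normalize a captured RGB frame.

    Assumptions for illustration (not from the paper): 64x64 target
    size, channel-mean grayscale, nearest-neighbour resize standing
    in for a proper resize such as cv2.resize.
    """
    gray = frame.mean(axis=2)                 # RGB -> single channel
    h, w = gray.shape
    rows = np.arange(size) * h // size        # nearest-neighbour row picks
    cols = np.arange(size) * w // size        # nearest-neighbour column picks
    small = gray[np.ix_(rows, cols)]
    return small / 255.0                      # scale pixel values to [0, 1]

def conv2d(img, kernel):
    """Valid-mode 2-D convolution: the core operation of one CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Stand-in for a camera frame: random 120x160 RGB image, values in [0, 255].
frame = np.random.randint(0, 256, (120, 160, 3)).astype(float)
x = preprocess(frame)                         # cleaned (64, 64) input
edge = np.array([[1, 0, -1]] * 3, dtype=float)  # hypothetical edge-detecting kernel
fmap = np.maximum(conv2d(x, edge), 0)         # convolution followed by ReLU
print(x.shape, fmap.shape)                    # (64, 64) (62, 62)
```

In the actual system, many such kernels would be learned from the labeled gesture dataset and stacked across multiple layers, with a final classification layer mapping the feature maps to gesture labels and hence to text.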