Analyses of Machine Learning Techniques for Sign Language to Text Conversion for the Speech Impaired

J. Ajay, R. Sumathi, K. Arjun, B. Durga Hemanth, K. Nihal Saneen
{"title":"语言障碍手语文本转换的机器学习技术分析","authors":"J. Ajay, R. Sumathi, K. Arjun, B. Durga Hemanth, K. Nihal Saneen","doi":"10.1109/ICCCI56745.2023.10128515","DOIUrl":null,"url":null,"abstract":"Human computer interaction is the research of how individuals and computers interact. When someone does not understand what we are saying, especially when they do not, hand gestures are an excellent way to communicate. It is also a fundamental part of human-computer interaction. It’s essential to comprehend hand signals in order to make sure that everyone in the group understands what the person is trying to say and also that the computer understands what we will be saying. This project’s primary objective is to experiment with various methods for hand gesture recognition. In this project, we use a camera sensor to identify nonverbal communication. Because most individuals do not really know sign language because there are not many interpreters, we first tried to create hand gesture recognition. Then, we built a real-time method for American Sign Language based on deep neural network finger typing, backed again by an approach with media Pipe. We offer a deep cognitive network (CNN) method for identifying human hand gestures in photographs taken with a camera. The objective is to separate camera images from hand motions made during human activity. The training and test data for the CNN were created using skin model, hand location, and orientation information. The filter is the first thing the hand goes through before it is classified according to the sort of hand motion it will make. To build this model, we used computer vision, deep learning, and machine learning. 
Our Media Pipe model does a good job of detecting multiple gestures","PeriodicalId":205683,"journal":{"name":"2023 International Conference on Computer Communication and Informatics (ICCCI)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analyses of Machine Learning Techniques for Sign Language to Text conversion for Speech Impaired\",\"authors\":\"J. Ajay, R. Sumathi, K. Arjun, B. Durga Hemanth, K. Nihal Saneen\",\"doi\":\"10.1109/ICCCI56745.2023.10128515\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human computer interaction is the research of how individuals and computers interact. When someone does not understand what we are saying, especially when they do not, hand gestures are an excellent way to communicate. It is also a fundamental part of human-computer interaction. It’s essential to comprehend hand signals in order to make sure that everyone in the group understands what the person is trying to say and also that the computer understands what we will be saying. This project’s primary objective is to experiment with various methods for hand gesture recognition. In this project, we use a camera sensor to identify nonverbal communication. Because most individuals do not really know sign language because there are not many interpreters, we first tried to create hand gesture recognition. Then, we built a real-time method for American Sign Language based on deep neural network finger typing, backed again by an approach with media Pipe. We offer a deep cognitive network (CNN) method for identifying human hand gestures in photographs taken with a camera. The objective is to separate camera images from hand motions made during human activity. The training and test data for the CNN were created using skin model, hand location, and orientation information. 
The filter is the first thing the hand goes through before it is classified according to the sort of hand motion it will make. To build this model, we used computer vision, deep learning, and machine learning. Our Media Pipe model does a good job of detecting multiple gestures\",\"PeriodicalId\":205683,\"journal\":{\"name\":\"2023 International Conference on Computer Communication and Informatics (ICCCI)\",\"volume\":\"74 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 International Conference on Computer Communication and Informatics (ICCCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCI56745.2023.10128515\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Computer Communication and Informatics (ICCCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCI56745.2023.10128515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Human-computer interaction is the study of how people and computers interact, and hand gestures are a fundamental part of it: they are an excellent way to communicate when someone cannot understand spoken language, and it is essential that both the other people in a conversation and the computer can interpret them. This project's primary objective is to experiment with various methods for hand gesture recognition, using a camera sensor to identify nonverbal communication. Because most people do not know sign language and interpreters are scarce, we first built a hand gesture recognizer, and then a real-time fingerspelling method for American Sign Language based on a deep neural network, supported by a MediaPipe-based approach. We present a convolutional neural network (CNN) method for identifying human hand gestures in camera images; the objective is to separate the hand motions made during human activity from the rest of the camera image. The training and test data for the CNN were created using a skin model together with hand location and orientation information. The hand image first passes through a filter and is then classified according to the type of hand motion. To build this model we used computer vision, deep learning, and machine learning, and our MediaPipe model performs well at detecting multiple gestures.
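The MediaPipe-plus-classifier pipeline the abstract describes can be illustrated with a minimal sketch. The details below are assumptions, not the paper's implementation: MediaPipe's hand tracker emits 21 (x, y, z) landmarks per detected hand, and a common preprocessing step before classification is to translate the landmarks to the wrist origin and scale-normalize them, so the classifier sees gestures independently of hand position and size. The nearest-centroid classifier and the label names are purely illustrative stand-ins for the paper's CNN.

```python
# Hypothetical sketch of landmark preprocessing and classification for a
# MediaPipe-style hand tracker (21 landmarks, landmark 0 = wrist).
import math
from typing import Dict, List, Tuple

Landmark = Tuple[float, float, float]

def normalize_landmarks(landmarks: List[Landmark]) -> List[Landmark]:
    """Translate landmarks to the wrist origin, then divide by the
    largest wrist-to-landmark distance so the hand fits a unit sphere."""
    wx, wy, wz = landmarks[0]
    shifted = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in shifted) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in shifted]

def classify(landmarks: List[Landmark],
             centroids: Dict[str, List[float]]) -> str:
    """Toy nearest-centroid classifier over flattened, normalized
    landmarks; a trained CNN would replace this step in practice."""
    flat = [c for point in normalize_landmarks(landmarks) for c in point]
    def sq_dist(label: str) -> float:
        return sum((a - b) ** 2 for a, b in zip(flat, centroids[label]))
    return min(centroids, key=sq_dist)
```

Normalizing before classification is what makes a single gesture template match hands of different sizes at different places in the frame, which is why landmark-based pipelines often need far less training data than classifying raw pixels.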