Analyses of Machine Learning Techniques for Sign Language to Text Conversion for Speech Impaired

J. Ajay, R. Sumathi, K. Arjun, B. Durga Hemanth, K. Nihal Saneen

2023 International Conference on Computer Communication and Informatics (ICCCI), 23 January 2023. DOI: 10.1109/ICCCI56745.2023.10128515
Human-computer interaction is the study of how people and computers interact. When someone cannot understand what we are saying, hand gestures are an excellent way to communicate, and they are also a fundamental part of human-computer interaction. Understanding hand signals is essential so that everyone in a group follows what a signer is trying to say, and so that the computer understands it as well. The primary objective of this project is to experiment with various methods for hand gesture recognition. We use a camera sensor to capture nonverbal communication. Because most people do not know sign language and interpreters are scarce, we first set out to build hand gesture recognition. We then built a real-time fingerspelling method for American Sign Language based on a deep neural network, supported by a MediaPipe-based approach. We present a convolutional neural network (CNN) method for identifying human hand gestures in images captured by a camera; the objective is to separate the hand motions made during human activity from the rest of the camera image. The training and test data for the CNN were created using a skin model together with hand location and orientation information. Each hand image first passes through a filter before being classified by the type of hand motion it represents. To build this model, we used computer vision, deep learning, and machine learning, and our MediaPipe model performs well at detecting multiple gestures.
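The abstract credits MediaPipe with the real-time hand tracking that backs the fingerspelling recognizer. As a minimal sketch of that stage, assuming the legacy mp.solutions.hands Python API and illustrative parameter values rather than the authors' actual settings, the following loop pulls webcam frames and extracts per-frame hand landmarks:

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_draw = mp.solutions.drawing_utils

    # Track at most one hand in a live webcam stream; the confidence
    # threshold here is an assumed value, not the paper's setting.
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        cap = cv2.VideoCapture(0)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # 21 (x, y, z) landmarks per hand, drawn here for inspection.
                    mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
            cv2.imshow("hand tracking", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()

The 21 landmarks per hand can either feed a classifier directly or be used to crop the hand region that the CNN stage consumes.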
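The abstract also mentions a skin model and a filter the hand passes through before classification. One common way to realize such a filter is color-space thresholding; the sketch below masks out non-skin pixels so the classifier sees mostly the hand, using textbook YCrCb skin-tone bounds that are an assumption rather than values taken from the paper:

    import cv2
    import numpy as np

    def skin_filter(bgr_frame):
        """Suppress non-skin pixels so the classifier sees mostly the hand."""
        ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
        # Typical skin-tone bounds in YCrCb; tune per camera and lighting.
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        # Morphological opening removes speckle noise from the mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.bitwise_and(bgr_frame, bgr_frame, mask=mask)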
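For the CNN classifier itself the abstract gives no architecture, so the Keras model below is only a plausible sketch: it assumes 64x64 single-channel hand crops and 26 output classes, one per ASL fingerspelling letter, both of which are assumptions on our part:

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 26  # assumed: one class per ASL fingerspelling letter

    def build_gesture_cnn(input_shape=(64, 64, 1)):
        """Small CNN mapping a filtered hand crop to a letter class."""
        model = keras.Sequential([
            keras.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),  # regularization; rate is an assumed choice
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

Trained with model.fit on the skin-filtered crops, the softmax output then gives a predicted letter per frame, which is how a pipeline like the one described could convert signing to text in real time.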