{"title":"Sign Language Recognition Using Deep Learning","authors":"N. M","doi":"10.1109/CCIP57447.2022.10058655","DOIUrl":null,"url":null,"abstract":"Communication between hearing people and those who are deaf or mute has always been difficult. This paper surveys methods introduced to help them communicate effectively. Human interpreters and assistive tools exist, but not everyone can afford such aid, so sign language remains their primary mode of communication. The project's primary goal is therefore to assist these individuals with a system that recognizes signs, translates them into text, and enables them to lead a normal social life. Previously, a hand-detection method was developed as a learning tool for sign-language novices. That system used a skin-color modeling technique known as explicit skin-color space thresholding: a specified range of skin tones separates hand pixels from background pixels. The images were then given as input to a convolutional neural network (CNN), a deep learning model, which we implement and train using Keras. This document provides information on a variety of projects and research on sign language detection in the domains of machine learning, deep learning, and image depth data. It also considers several of the problems that must be overcome, as well as the future scope.","PeriodicalId":309964,"journal":{"name":"2022 Fourth International Conference on Cognitive Computing and Information Processing (CCIP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Fourth International Conference on Cognitive Computing and Information Processing (CCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCIP57447.2022.10058655","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Communication between hearing people and those who are deaf or mute has always been difficult. This paper surveys methods introduced to help them communicate effectively. Human interpreters and assistive tools exist, but not everyone can afford such aid, so sign language remains their primary mode of communication. The project's primary goal is therefore to assist these individuals with a system that recognizes signs, translates them into text, and enables them to lead a normal social life. Previously, a hand-detection method was developed as a learning tool for sign-language novices. That system used a skin-color modeling technique known as explicit skin-color space thresholding: a specified range of skin tones separates hand pixels from background pixels. The images were then given as input to a convolutional neural network (CNN), a deep learning model, which we implement and train using Keras. This document provides information on a variety of projects and research on sign language detection in the domains of machine learning, deep learning, and image depth data. It also considers several of the problems that must be overcome, as well as the future scope.
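The explicit skin-color space thresholding mentioned above can be illustrated with a minimal sketch. The abstract does not give the paper's exact color space or thresholds; the version below assumes the YCrCb space with a commonly cited illustrative Cr/Cb skin range, implemented in plain NumPy. The function names and range values are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def rgb_to_ycrcb(img):
    """Convert an RGB image array (H, W, 3) to YCrCb using ITU-R BT.601 coefficients."""
    img = img.astype(np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0   # red-difference chroma, offset to [0, 255]
    cb = (b - y) * 0.564 + 128.0   # blue-difference chroma, offset to [0, 255]
    return np.stack([y, cr, cb], axis=-1)

def skin_mask(img, cr_range=(133, 173), cb_range=(77, 127)):
    """Explicit skin-color thresholding: True for pixels whose Cr/Cb values
    fall inside the (illustrative) skin range, False for the background."""
    ycrcb = rgb_to_ycrcb(img)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))
```

The resulting boolean mask separates the hand region from the background before the cropped image is passed to the CNN; luminance (Y) is deliberately ignored so the threshold is less sensitive to lighting.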