Sign Language Gesture to Speech Conversion Using Convolutional Neural Network
Shreya Tope, Sadnyani Gomkar, Pukhraj Rathkanthiwar, Aayushi Ganguli, P. Selokar
International Journal of Next-Generation Computing, 2023-02-15. DOI: 10.47164/ijngc.v14i1.999
Abstract
The inability to speak is a genuine disability. People with this condition have numerous ways to communicate with others, and sign language is one of the most widely used. In sign language, meaning is conveyed through body language, with each word represented by a specific sequence of gestures.
The goal of this paper is to interpret human gestures and translate sign language into speech. Using a deep convolutional neural network, we first construct the dataset by saving hand-gesture images in a database, then train and test the system with an appropriate model on these images. When a user launches the application, it detects the gestures stored in the database and displays the corresponding results. Such a system can assist people who are hard of hearing while simultaneously making communication with them simpler for everyone else.
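As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch in Python, assuming a Keras CNN classifier and the pyttsx3 text-to-speech library. The layer sizes, the 64x64 grayscale input, and the four-word gesture vocabulary are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: CNN gesture classifier + text-to-speech output.
# Architecture, input size, and label set are hypothetical.
import numpy as np
import pyttsx3
from tensorflow.keras import layers, models

GESTURE_LABELS = ["hello", "thanks", "yes", "no"]  # hypothetical vocabulary


def build_model(num_classes: int) -> models.Model:
    """Small CNN that classifies a 64x64 grayscale hand-gesture image."""
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def speak_prediction(model: models.Model, frame: np.ndarray) -> str:
    """Classify one preprocessed 64x64 frame and speak the predicted word."""
    probs = model.predict(frame[np.newaxis, ..., np.newaxis], verbose=0)
    word = GESTURE_LABELS[int(np.argmax(probs))]
    engine = pyttsx3.init()
    engine.say(word)
    engine.runAndWait()
    return word
```

In a full application, the frame would come from a webcam capture loop (e.g. OpenCV), with the same preprocessing applied at inference time as during training.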