{"title":"A Multi-Modular Approach for Sign Language and Speech Recognition for Deaf-Mute People","authors":"D. Dahanayaka, B. Madhusanka, I. U. Atthanayake","doi":"10.4038/engineer.v54i4.7474","DOIUrl":null,"url":null,"abstract":"Deaf and Mute people cannot communicate efficiently to express their feelings to ordinary people. The common method these people use for communication is the sign language. But these sign languages are not very familiar to ordinary people. Therefore, effective communication between deaf and mute people and ordinary people is seriously affected. This paper presents the development of an Android mobile application to translate sign language into speech-language for ordinary people, and speech into text for deaf and mute people using Convolution Neural Network (CNN). The study focuses on vision-based Sign Language Recognition (SLR) and Automatic Speech Recognition (ASR) mobile application. The main challenging tasks were audio classification and image classification. Therefore, CNN was used to train audio clips and images. Mel-frequency Cepstral Coefficient (MFCC) approach was used for ASR. The mobile application was developed by Python programming and Android Studio. After developing the application, testing was done for letters A and C, and these letters were identified with 95% accuracy.","PeriodicalId":42812,"journal":{"name":"Engineer-Journal of the Institution of Engineers Sri Lanka","volume":"12 1","pages":""},"PeriodicalIF":0.4000,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineer-Journal of the Institution of Engineers Sri Lanka","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4038/engineer.v54i4.7474","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 3
Abstract
Deaf and mute people cannot communicate efficiently to express their feelings to ordinary people. The common method they use for communication is sign language, but sign languages are not familiar to most ordinary people, so effective communication between deaf and mute people and ordinary people is seriously hindered. This paper presents the development of an Android mobile application that translates sign language into speech for ordinary people and speech into text for deaf and mute people using a Convolutional Neural Network (CNN). The study focuses on a vision-based Sign Language Recognition (SLR) and Automatic Speech Recognition (ASR) mobile application. The main challenging tasks were audio classification and image classification; therefore, a CNN was trained on audio clips and images. The Mel-Frequency Cepstral Coefficient (MFCC) approach was used for ASR. The mobile application was developed using Python and Android Studio. After development, the application was tested on the letters A and C, which were identified with 95% accuracy.
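The abstract describes an MFCC front end feeding a CNN classifier for the audio branch. The following Python sketch illustrates that general pipeline only; the file names, MFCC settings, network layout, and hyper-parameters are assumptions for illustration and are not taken from the authors' implementation.

```python
# Illustrative sketch: MFCC feature extraction followed by a small CNN
# classifier, along the lines described in the abstract. All names, shapes,
# and hyper-parameters below are assumptions, not the paper's actual code.
import numpy as np
import librosa                      # audio loading and MFCC computation
from tensorflow.keras import layers, models

def extract_mfcc(wav_path, sr=16000, n_mfcc=13, max_frames=100):
    """Load an audio clip and return a fixed-size MFCC matrix."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Pad or truncate along the time axis so every clip has the same shape.
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :max_frames]
    return mfcc[..., np.newaxis]    # add a channel axis for the CNN

def build_cnn(input_shape, num_classes):
    """Small CNN; the same layout could also serve image (sign) inputs."""
    return models.Sequential([
        layers.Conv2D(16, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Example usage with placeholder paths and labels:
# X = np.stack([extract_mfcc(p) for p in ["clip_a.wav", "clip_c.wav"]])
# y = np.array([0, 1])
# model = build_cnn(X.shape[1:], num_classes=2)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(X, y, epochs=10)
```

A parallel CNN of the same general shape could be trained on sign-image frames for the vision-based SLR branch, with the two modules combined in the Android application.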