K. S. Vikash, Kaavya Jayakrishnan, Siddharth Ramanathan, G. Rohith, Vijayendra Hanumara
"An approach to Generation of sentences using Sign Language Detection"
2023 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT)
Published: 2023-05-25 · DOI: 10.1109/IConSCEPT57958.2023.10170218
Citations: 0
Abstract
Deaf and mute people communicate naturally through sign language. This article presents an application that addresses sign language detection using computer vision and machine learning. The proposed system is a sign language interpreter that recognizes and understands signed words. The detected words and phrases are assembled into a sentence, giving the user a complete translation. The system collects video of a signer through a camera and applies computer vision algorithms to recognize hand motions and movements; most of this activity is performed by the user's dominant hand (left or right). The Single Shot MultiBox Detector (SSD) MobileNet V2 deep learning model is used to recognize these hand motions and convert the identified signs into text output. The system is trained on a dataset of sign language phrases, and its accuracy is assessed using a range of performance indicators. The proposed technique is 96% accurate in identifying the sign and 100% accurate in translating it into an interpretation.
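The abstract does not include code, so as a rough illustration of the sentence-generation step it describes, the sketch below shows one way per-frame word predictions from a detector such as SSD MobileNet V2 could be debounced and joined into a sentence. The function name, the consecutive-frame threshold, and the filtering logic are assumptions for illustration, not the authors' implementation.

```python
def assemble_sentence(frame_predictions, min_consecutive=3):
    """Collapse a stream of per-frame sign predictions into a sentence.

    A word is accepted into the sentence only after it has been
    predicted for `min_consecutive` consecutive frames, which filters
    out transient misdetections between gestures.
    """
    sentence = []
    current, run = None, 0
    for word in frame_predictions:
        if word == current:
            run += 1
        else:
            current, run = word, 1
        # Accept the word exactly once, when its run first reaches the
        # threshold, and avoid immediate repetition in the sentence.
        if run == min_consecutive and (not sentence or sentence[-1] != word):
            sentence.append(word)
    return " ".join(sentence)


# Example: a one-frame misdetection ("noise") between stable signs is dropped.
frames = ["hello"] * 3 + ["noise"] + ["how"] * 3 + ["are"] * 3 + ["you"] * 3
print(assemble_sentence(frames))  # hello how are you
```

In a real pipeline the input list would come from running the object detector on each camera frame and taking the highest-confidence class label; the debouncing threshold would be tuned to the camera's frame rate and the signer's speed.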