{"title":"基于计算机视觉的实时手语识别","authors":"Jinalee Jayeshkumar Raval, Ruchi Gajjar","doi":"10.1109/ICSPC51351.2021.9451709","DOIUrl":null,"url":null,"abstract":"Speech impairment is a disability that affects an individual’s ability to verbal communication. To overcome this issue sign language is used which is one of the most organised languages. There is definitely a need for a method or an application that can recognize sign language gestures so that communication is possible even if someone does not understand sign language. My paper is an effort towards filling the gap between differently-abled people like deaf and dumb and the other people. Image processing combined with machine learning helped in forming a real-time system. Image processing is used for pre-processing the images and extracting different hand from the background. These images obtained after extracting background were used for forming data that contained 24 alphabets of the English language. The Convolutional Neural Network proposed here is tested on both a custom-made dataset and also with real-time hand gestures performed by people of different skin tones. The accuracy obtained by the proposed algorithm is 83%.","PeriodicalId":182885,"journal":{"name":"2021 3rd International Conference on Signal Processing and Communication (ICPSC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Real-time Sign Language Recognition using Computer Vision\",\"authors\":\"Jinalee Jayeshkumar Raval, Ruchi Gajjar\",\"doi\":\"10.1109/ICSPC51351.2021.9451709\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Speech impairment is a disability that affects an individual’s ability to verbal communication. To overcome this issue sign language is used which is one of the most organised languages. There is definitely a need for a method or an application that can recognize sign language gestures so that communication is possible even if someone does not understand sign language. My paper is an effort towards filling the gap between differently-abled people like deaf and dumb and the other people. Image processing combined with machine learning helped in forming a real-time system. Image processing is used for pre-processing the images and extracting different hand from the background. These images obtained after extracting background were used for forming data that contained 24 alphabets of the English language. The Convolutional Neural Network proposed here is tested on both a custom-made dataset and also with real-time hand gestures performed by people of different skin tones. 
The accuracy obtained by the proposed algorithm is 83%.\",\"PeriodicalId\":182885,\"journal\":{\"name\":\"2021 3rd International Conference on Signal Processing and Communication (ICPSC)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 3rd International Conference on Signal Processing and Communication (ICPSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSPC51351.2021.9451709\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 3rd International Conference on Signal Processing and Communication (ICPSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSPC51351.2021.9451709","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Real-time Sign Language Recognition using Computer Vision
Speech impairment is a disability that affects an individual's ability to communicate verbally. Sign language, one of the most structured forms of visual communication, is used to overcome this barrier, but a method or application that can recognize sign language gestures is still needed so that communication remains possible with people who do not understand sign language. This paper is an effort toward bridging the gap between deaf and speech-impaired people and the rest of society. Image processing combined with machine learning is used to build a real-time recognition system: image processing pre-processes the captured frames and segments the hand from the background, and the segmented images are used to construct a dataset covering 24 letters of the English alphabet. The Convolutional Neural Network proposed here is evaluated both on the custom-made dataset and on real-time hand gestures performed by people of different skin tones, and the proposed algorithm achieves an accuracy of 83%.
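The abstract describes a two-stage pipeline: hand segmentation from the background followed by CNN classification of 24 static letters in real time. The sketch below illustrates one way such a pipeline could look; the skin-mask thresholds, 64x64 input size, and network layout are illustrative assumptions and not the architecture published in the paper.

```python
# Minimal sketch of a segment-then-classify pipeline, assuming OpenCV for
# pre-processing and a small Keras CNN. All hyperparameters are assumptions.
import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 24   # static letters only (motion-based letters excluded)
IMG_SIZE = 64      # assumed input resolution

def segment_hand(frame_bgr):
    """Extract the hand from the background (illustrative skin-mask approach)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Broad skin-tone range; a real system would calibrate per user/lighting.
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    hand = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0

def build_cnn():
    """Small CNN classifier over the 24 letter classes (assumed layout)."""
    return models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    model = build_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Real-time loop: grab webcam frames, segment the hand, classify.
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        x = segment_hand(frame)[None, ..., None]   # shape (1, 64, 64, 1)
        letter_idx = int(np.argmax(model.predict(x, verbose=0)))
        cv2.putText(frame, f"class {letter_idx}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("sign", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

In practice the model would be trained on the segmented 24-letter dataset before the real-time loop is run; the untrained network above only demonstrates how the segmentation output feeds the classifier frame by frame.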