{"title":"实时手语转换为哑巴和聋人","authors":"Akshit J. Dhruv, Santosh Kumar Bharti","doi":"10.1109/aimv53313.2021.9670928","DOIUrl":null,"url":null,"abstract":"Deaf people may get irritated due to the problem of not being able to share their views with common people, which may affect their day-to-day life. This is the main reason to develop such system that can help these people and they can also put their thoughts forward similar to other people who don’t have such problem. The advancement in the Artificial intelligence provides the door for developing the system that overcome this difficulty. So this project aims on developing a system which will be able to convert the speech to text for the deaf person, and also sometimes the person might not be able to understand just by text, so the speech will also get converted to the universal sign language. Similarly, for the mute people the sign language which they are using will get converted to speech. We will take help of various ML and AI concepts along with NLP to develop the accurate model. Convolutional neural networks (CNN) will be used for prediction as it is efficient in predicting image input, also as lip movements are fast and continuous so it is hard to capture so along with CNN, the use of attention-based long short-term memory (LSTM) will prove to be efficient. Data Augmentation methods will be used for getting the better results. TensorFlow and Keras are the python libraries that will be used to convert the speech to text. Currently there are many software available but all requires the network connectivity for it to work, while this device will work without the requirement of internet.Using the proposed model we got the accuracy of 100% in predicting sign language and 96% accuracy in sentence level understanding.","PeriodicalId":135318,"journal":{"name":"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real-Time Sign Language Converter for Mute and Deaf People\",\"authors\":\"Akshit J. Dhruv, Santosh Kumar Bharti\",\"doi\":\"10.1109/aimv53313.2021.9670928\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deaf people may get irritated due to the problem of not being able to share their views with common people, which may affect their day-to-day life. This is the main reason to develop such system that can help these people and they can also put their thoughts forward similar to other people who don’t have such problem. The advancement in the Artificial intelligence provides the door for developing the system that overcome this difficulty. So this project aims on developing a system which will be able to convert the speech to text for the deaf person, and also sometimes the person might not be able to understand just by text, so the speech will also get converted to the universal sign language. Similarly, for the mute people the sign language which they are using will get converted to speech. We will take help of various ML and AI concepts along with NLP to develop the accurate model. Convolutional neural networks (CNN) will be used for prediction as it is efficient in predicting image input, also as lip movements are fast and continuous so it is hard to capture so along with CNN, the use of attention-based long short-term memory (LSTM) will prove to be efficient. 
Data Augmentation methods will be used for getting the better results. TensorFlow and Keras are the python libraries that will be used to convert the speech to text. Currently there are many software available but all requires the network connectivity for it to work, while this device will work without the requirement of internet.Using the proposed model we got the accuracy of 100% in predicting sign language and 96% accuracy in sentence level understanding.\",\"PeriodicalId\":135318,\"journal\":{\"name\":\"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/aimv53313.2021.9670928\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/aimv53313.2021.9670928","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Real-Time Sign Language Converter for Mute and Deaf People
Deaf people can become frustrated at being unable to share their views with hearing people, which affects their day-to-day lives. This is the main motivation for developing a system that helps them express their thoughts as easily as people without such impairments. Advances in artificial intelligence open the door to building a system that overcomes this difficulty. This project therefore aims to develop a system that converts speech to text for deaf users and, because text alone is not always sufficient for comprehension, also converts the speech into universal sign language. Conversely, for mute users, the sign language they produce is converted into speech. We draw on machine learning, artificial intelligence, and natural language processing (NLP) concepts to build an accurate model. A convolutional neural network (CNN) is used for prediction because it handles image input efficiently, and since lip movements are fast and continuous and therefore hard to capture, an attention-based long short-term memory (LSTM) network is used alongside the CNN. Data augmentation methods are applied to improve results. TensorFlow and Keras are the Python libraries used for the speech-to-text conversion. Much of the software currently available requires network connectivity to work, whereas the proposed device operates without an internet connection. Using the proposed model, we obtained 100% accuracy in predicting sign language and 96% accuracy in sentence-level understanding.
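To make the described pipeline more concrete, the following is a minimal, illustrative Keras sketch of the kind of model the abstract outlines: a small CNN applied to each video frame and an attention-based LSTM over the frame sequence. The layer sizes, class count, frame shape, and sequence length are assumptions for illustration only and are not taken from the paper.

```python
# Illustrative sketch only: a small per-frame CNN feeding an attention-based LSTM,
# as suggested by the abstract. All hyperparameters below are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SIGNS = 26             # assumed number of sign classes (e.g. A-Z fingerspelling)
FRAME_SHAPE = (64, 64, 1)  # assumed grayscale frame size
SEQ_LEN = 20               # assumed number of frames per gesture sequence

# Per-frame CNN feature extractor
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=FRAME_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
])

# Sequence model: apply the CNN to every frame, then an LSTM with self-attention
frames = layers.Input(shape=(SEQ_LEN, *FRAME_SHAPE))
features = layers.TimeDistributed(cnn)(frames)               # (batch, SEQ_LEN, 128)
hidden = layers.LSTM(128, return_sequences=True)(features)   # per-step hidden states
attended = layers.Attention()([hidden, hidden])              # attention over time steps
pooled = layers.GlobalAveragePooling1D()(attended)
outputs = layers.Dense(NUM_SIGNS, activation="softmax")(pooled)

model = models.Model(frames, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Data augmentation (for example, random rotations and shifts of the input frames) and the speech-to-text component described in the abstract would sit on top of this skeleton; neither is shown here.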