{"title":"一种基于人工智能的手语文本模式预测方法","authors":"Bhargav, DN Abhishek, Deekshitha, Skanda Talanki, Sumalatha Aradhya, Thejaswini","doi":"10.1145/3474124.3474210","DOIUrl":null,"url":null,"abstract":"A large social group get benefit from sign language detection through technology, but it is an overlooked concept. Communicating with others in society is a primary aim of learning sign language. Communication between members of this social group is rare due to limited access to technology. Hearing-impaired people are left behind. As normal people cannot make signs, they need to use texting methods to communicate with hearing-impaired people, which is less than ideal. Increasingly, deaf people must be able to communicate naturally no matter the practitioner's knowledge of sign language. An analysis of sign language is based on the patterns of movement generated by the hand or finger, commonly referred to as sign language. The aim of this paper is to recognize sign language gestures using convolutional neural networks. The proposed solution would generate the text pattern from the sign gesture. An RGB camera was used to capture static sign language gestures. Preprocessed images were used to create the cleaned input images. The dataset of sign language gestures was trained and tested on multiple convolutional neural network layers. The trained model recognizes the hand gestures and generates the speech from the text. 
In addition to outlining the challenges posed by such a problem, it also outlines future opportunities.","PeriodicalId":144611,"journal":{"name":"2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)","volume":"202 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An AI based Solution for Predicting the Text Pattern from Sign Language\",\"authors\":\"Bhargav, DN Abhishek, Deekshitha, Skanda Talanki, Sumalatha Aradhya, Thejaswini\",\"doi\":\"10.1145/3474124.3474210\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A large social group get benefit from sign language detection through technology, but it is an overlooked concept. Communicating with others in society is a primary aim of learning sign language. Communication between members of this social group is rare due to limited access to technology. Hearing-impaired people are left behind. As normal people cannot make signs, they need to use texting methods to communicate with hearing-impaired people, which is less than ideal. Increasingly, deaf people must be able to communicate naturally no matter the practitioner's knowledge of sign language. An analysis of sign language is based on the patterns of movement generated by the hand or finger, commonly referred to as sign language. The aim of this paper is to recognize sign language gestures using convolutional neural networks. The proposed solution would generate the text pattern from the sign gesture. An RGB camera was used to capture static sign language gestures. Preprocessed images were used to create the cleaned input images. The dataset of sign language gestures was trained and tested on multiple convolutional neural network layers. The trained model recognizes the hand gestures and generates the speech from the text. 
In addition to outlining the challenges posed by such a problem, it also outlines future opportunities.\",\"PeriodicalId\":144611,\"journal\":{\"name\":\"2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)\",\"volume\":\"202 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3474124.3474210\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3474124.3474210","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An AI based Solution for Predicting the Text Pattern from Sign Language
Sign language detection through technology could benefit a large social group, yet it remains an overlooked problem. Communicating with others in society is a primary aim of learning sign language, but limited access to such technology means communication between this group and the wider society is rare, and hearing-impaired people are left behind. Since most hearing people cannot sign, they must resort to texting to communicate with hearing-impaired people, which is far from ideal. Deaf people should be able to communicate naturally regardless of the other party's knowledge of sign language. Sign language analysis is based on the patterns of movement generated by the hand and fingers. The aim of this paper is to recognize sign language gestures using convolutional neural networks; the proposed solution generates a text pattern from a sign gesture. An RGB camera was used to capture static sign language gestures, and the captured images were preprocessed to produce cleaned inputs. A dataset of sign language gestures was trained and tested on a convolutional neural network with multiple layers. The trained model recognizes hand gestures, and speech is then generated from the recognized text. In addition to outlining the challenges posed by this problem, the paper also outlines future opportunities.
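The pipeline the abstract describes (RGB capture → preprocessing into a cleaned grayscale input → convolutional feature extraction) can be sketched minimally in numpy. This is an illustrative reconstruction, not the authors' implementation: the grayscale conversion, kernel values, and image shapes are all assumptions, and a real system would use a trained multi-layer CNN rather than a single hand-written filter.

```python
import numpy as np

def preprocess(frame):
    """Reduce an RGB frame to a normalized grayscale image.

    Stands in for the paper's 'cleaned input image' step: the RGB
    capture is collapsed to one channel and scaled to [0, 1].
    """
    gray = frame.mean(axis=2)   # naive RGB -> grayscale average
    return gray / 255.0         # scale pixel values to [0, 1]

def conv2d_relu(image, kernel):
    """Valid 2-D convolution followed by ReLU, as one CNN layer applies."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Example: a synthetic 8x8 "frame" run through both stages.
frame = np.full((8, 8, 3), 128, dtype=np.uint8)    # flat mid-grey image
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])         # vertical-edge detector
features = conv2d_relu(preprocess(frame), edge_kernel)
print(features.shape)   # (6, 6): 8 - 3 + 1 in each spatial dimension
```

In the full system, stacks of such convolution layers (with learned kernels) would feed a classifier whose predicted label is the output text, which a text-to-speech stage then voices.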