Gangadhar Chakali, Ch. Govardhan Reddy, B. Bharathi
Sign Language Translation in WebRTC Application
DOI: 10.1109/ICOEI56765.2023.10125915

Abstract: Communication is an essential part of human life, and many languages exist around the world to support it. Yet people who have lost their hearing or speech, whether through accidents or congenital conditions, often face difficulty communicating. Hearing-impaired people find sign language helpful for communicating with others, but they must have a personal interpreter available whenever they need to interact with people who do not sign. They also find it challenging to interact with others on social media and the internet and to form new relationships on their own. An open-source video-conferencing application that can translate sign language is therefore highly valuable for the hearing impaired. Sign language recognition (SLR) has drawn considerable attention as a way to close this communication gap; however, sign language is far more complex and variable than other activities, which makes reliable recognition challenging. A Speech-to-Text API enables speech-impaired people who can read to comprehend others, while the Sign Language Translation Application (SLTA) lets them communicate by translating their sign language into text that others can understand. The proposed method uses Python, the MediaPipe framework for gesture data extraction, and a Deep Gesture Recognition (DGR) model to identify sign motions in real time.
The proposed method achieves a peak accuracy of 98.81% using a neural network composed of Long Short-Term Memory (LSTM) units for sequence identification.

Published in: 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI), 2023-04-11.
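The pipeline the abstract describes — per-frame hand landmarks extracted with MediaPipe, folded into a single gesture representation by an LSTM, then classified — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: MediaPipe Hands reports 21 landmarks per hand, each with (x, y, z), giving a 63-dimensional feature vector per frame; the sequence length, hidden size, class count, and (random) weights below are all hypothetical stand-ins for a trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(frames, Wx, Wh, b):
    """Run a single-layer LSTM over a (T, 63) landmark sequence.

    Wx: (63, 4H) input weights, Wh: (H, 4H) recurrent weights, b: (4H,) bias.
    Gate order in the packed weight matrices: input, forget, candidate, output.
    """
    H = Wh.shape[0]
    h = np.zeros(H)  # hidden state
    c = np.zeros(H)  # cell state
    for x_t in frames:
        z = x_t @ Wx + h @ Wh + b
        i = sigmoid(z[0:H])         # input gate
        f = sigmoid(z[H:2 * H])     # forget gate
        g = np.tanh(z[2 * H:3 * H]) # candidate cell state
        o = sigmoid(z[3 * H:4 * H]) # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # final hidden state summarises the whole gesture

def classify(frames, Wx, Wh, b, Wout):
    """Map the final LSTM state to per-class probabilities via softmax."""
    h = lstm_forward(frames, Wx, Wh, b)
    logits = h @ Wout
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical dimensions: 30 frames per gesture, 63 MediaPipe features,
# hidden size 32, 10 sign classes. Random data stands in for real landmarks.
rng = np.random.default_rng(0)
T, D, H, n_classes = 30, 63, 32, 10
frames = rng.normal(size=(T, D))
Wx = rng.normal(scale=0.1, size=(D, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)
Wout = rng.normal(scale=0.1, size=(H, n_classes))

probs = classify(frames, Wx, Wh, b, Wout)
print(probs.shape)
```

In a real-time WebRTC setting, each incoming video frame would pass through MediaPipe to yield the 63-value landmark vector, a sliding window of recent frames would form the `(T, 63)` sequence, and the class with the highest probability would be emitted as the recognised sign.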