Title: Real Time Hand Sign Language Translation: Text and Speech Conversion
Authors: Yasaswini M, Sanjay S, Lokesh U, Arun M. A
DOI: 10.55041/ijsrem36998
Journal: International Journal of Scientific Research in Engineering and Management
Publication date: 2024-08-09
Type: Journal Article
Citations: 0
Abstract
The sign language conversion project presents a real-time system that interprets sign language from a live webcam feed. Leveraging the MediaPipe library for landmark detection, the system extracts hand landmarks from each frame. The detected landmark coordinates are collected and stored in a CSV file for further analysis. A Random Forest classifier is then trained on this landmark data to classify different sign language gestures. During webcam processing, the trained model predicts the sign language class and its probability in real time, and the results are overlaid on the video stream, giving users immediate feedback on the subject's signing.

Key Words: Sign language recognition, Hand gesture recognition, Gesture-to-text conversion, Visual language processing.
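A pipeline like the one the abstract describes typically converts each frame's raw landmark coordinates into a translation- and scale-invariant feature vector before training the classifier. The sketch below illustrates one common normalization scheme (wrist-relative coordinates scaled to [-1, 1]); the function name `normalize_landmarks` and the exact normalization are assumptions for illustration, not details taken from the paper.

```python
def normalize_landmarks(landmarks):
    """Flatten 21 (x, y) hand landmarks into a wrist-relative,
    scale-normalized feature vector suitable for a classifier
    such as a Random Forest.

    `landmarks` is a list of 21 (x, y) pairs, as produced per frame
    by a hand-landmark detector like MediaPipe Hands (assumption:
    landmark 0 is the wrist, per the MediaPipe hand model).
    """
    base_x, base_y = landmarks[0]  # use the wrist landmark as the origin
    rel = [(x - base_x, y - base_y) for x, y in landmarks]
    # Divide by the largest absolute coordinate so features lie in [-1, 1];
    # guard against an all-zero hand with a fallback scale of 1.0.
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [v / scale for pair in rel for v in pair]
```

Each frame's 42-element output vector can then be appended as one row of the CSV file used to train the Random Forest classifier mentioned in the abstract.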