HAND SIGN LANGUAGE RECOGNITION USING AUGMENTED REALITY & MACHINE LEARNING

Mohammed Asif, Sameer Shrikhande, Hardik Pingale, Abhishek Joshi, Prof. Priyanka Sonawane

EPRA International Journal of Research & Development (IJRD), published 2024-04-21. DOI: 10.36713/epra16515
Abstract
Effective communication is a cornerstone of human interaction, fostering societal cohesion and development. Throughout history, communication has evolved from primitive drawings to complex languages, shaping the fabric of our society. Amid this progression, however, individuals with speech and hearing impairments have often faced significant barriers to communication. Despite constituting a minority, their needs are important and must not be overlooked. Given the broad classification of languages into verbal and non-verbal forms, it is evident that non-verbal languages play a crucial role, especially for Individuals with Hearing and Speech Impairments (IWSHI). These individuals rely on non-verbal communication methods to interact with the world around them, yet they often face barriers due to a lack of understanding and accessibility. To address this challenge, the HSLR app serves as a transformative tool, enabling IWSHI to communicate confidently. Leveraging technologies such as Augmented Reality (AR) and Machine Learning (ML), the app recognizes hand signs in real time and provides instantaneous translations for seamless communication. Additionally, the integration of AR technology enhances the user experience, offering an immersive and interactive sign-language communication platform. The MediaPipe-based model achieves high real-time recognition accuracy, owing to the ample training dataset we provided.
KEY WORDS: Hand Sign Language Recognition (HSLR), Augmented Reality (AR), Machine Learning (ML), American Sign Language (ASL), Computer Vision, MediaPipe
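To make the recognition pipeline described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes the hand-landmark stage (in the paper, MediaPipe Hands, which emits 21 (x, y) landmarks per detected hand) has already run, and stands in for the ML stage with a simple nearest-centroid classifier over wrist-normalized landmark vectors; the synthetic data, function names, and class names here are hypothetical.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Make a 42-dim feature vector from 21 (x, y) hand landmarks:
    wrist-relative (landmark 0 is the wrist in MediaPipe's indexing)
    and scaled to unit size, so the features are invariant to where
    the hand appears in the frame and how far it is from the camera."""
    pts = np.asarray(landmarks, dtype=float).reshape(21, 2)
    pts = pts - pts[0]                 # translate: wrist at the origin
    scale = np.abs(pts).max() or 1.0   # guard against division by zero
    return (pts / scale).ravel()

class NearestCentroidSignClassifier:
    """Minimal stand-in for the ML stage: one mean feature vector per sign."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lbl in zip(X, y) if lbl == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Tiny synthetic example: two "signs" with distinct landmark layouts,
# each observed several times with small jitter (simulating frames).
rng = np.random.default_rng(0)
open_hand = rng.uniform(0.0, 1.0, (21, 2))
fist = rng.uniform(0.0, 0.3, (21, 2))
X = [normalize_landmarks(open_hand + rng.normal(0, 0.01, (21, 2))) for _ in range(5)]
X += [normalize_landmarks(fist + rng.normal(0, 0.01, (21, 2))) for _ in range(5)]
y = ["open"] * 5 + ["fist"] * 5

clf = NearestCentroidSignClassifier().fit(X, y)
print(clf.predict(normalize_landmarks(open_hand)))
```

In a real-time app, each camera frame would pass through MediaPipe Hands, and the resulting landmark vector would be classified per frame (typically with a stronger model trained on a labeled ASL dataset), with the predicted sign then rendered as an AR overlay.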