American Sign Language Interpreter: A Bridge Between the Two Worlds
K. Sood, Bhargav Navdiya, Anthony Hernandez
2023 International Conference on Artificial Intelligence and Smart Communication (AISC), published 2023-01-27
DOI: 10.1109/AISC56616.2023.10085066
Citations: 0
Abstract
American Sign Language (ASL) is the third most commonly used language in the United States, after English and Spanish. In this work, we build an image classification technique for ASL to bridge the gap between native ASL signers, including children, and others. This paper presents a sign language recognition system based on machine learning. We use four conventional machine learning techniques: K-nearest neighbor, Naive Bayes, logistic regression, and random forest, to recognize signed letters of the alphabet in images drawn from an existing dataset and from a new dataset that we generate for this work. Our technique classifies images based on their grayscale values, so that the same sign can be identified across different environments: images captured under different illumination, hand signs placed at different positions in the frame than in the dataset image, or hand signs against diverse backgrounds. We use an existing dataset and a real-world dataset that we create independently, capturing images with an HP webcam through a computer vision library. We use supervised machine learning, training the classifiers on labeled image data to predict the ASL letter signed in a new image. Our analysis indicates that K-nearest neighbor performs best on both datasets, achieving up to 99% accuracy.
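The supervised pipeline the abstract describes (flatten each grayscale image into a feature vector, train on labeled examples, predict the letter in a new image) can be sketched with a minimal K-nearest-neighbor classifier. This is an illustrative sketch, not the authors' implementation: the data below is synthetic stand-in pixel data, and `knn_predict` and the two-letter setup are assumptions introduced for the example.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Predict a label by majority vote among the k nearest training
    images, using Euclidean distance in grayscale pixel space."""
    dists = np.linalg.norm(train_X - query, axis=1)  # distance to every training image
    nearest = np.argsort(dists)[:k]                  # indices of the k closest images
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # most common label among neighbors

# Synthetic stand-ins for two signed letters: "dark" vs. "bright" 8x8 images
# flattened to 64 grayscale values (the paper uses real labeled ASL images).
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.3, size=(20, 64))    # pretend letter "A"
bright = rng.uniform(0.7, 1.0, size=(20, 64))  # pretend letter "B"
train_X = np.vstack([dark, bright])
train_y = np.array(["A"] * 20 + ["B"] * 20)

query = rng.uniform(0.7, 1.0, size=64)         # a new, unlabeled "bright" image
print(knn_predict(train_X, train_y, query, k=5))  # prints "B"
```

The same interface would apply to the paper's other three classifiers: each is trained on the labeled grayscale vectors and queried with a new image's pixel values.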