{"title":"Hand Landmark Distance Based Sign Language Recognition using MediaPipe","authors":"P. K, Sandesh B.J","doi":"10.1109/ESCI56872.2023.10100061","DOIUrl":null,"url":null,"abstract":"The deaf and hard-of-hearing community uses sign language for communication and interaction with the external world. Sign language recognition has been an active area of research for many years, and there has been progress in both sensor-based and vision-based methods. Sensor-based methods, such as those that use gloves or other wearable devices, have historically been more accurate, but vision-based methods are becoming more prevalent due to their cost-effectiveness. The study aimed to recognize sign language words using hand pictures captured by a web camera. The mediapipe hands method was used to estimate hand landmarks, and features were generated from the distances between the landmarks. Support Vector Machine (SVM) classifiers were used for character and words classification. The study used its own dataset and it compared different scaling factors, including the distances from positions 0 to 17, 5 to 17, and 0 to 12, to determine which one worked best. The best results were found using the palm size distance (o–9). The proposed approach is economically feasible and computationally simple, requiring no specialized equipment.","PeriodicalId":441215,"journal":{"name":"2023 International Conference on Emerging Smart Computing and Informatics (ESCI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Emerging Smart Computing and Informatics (ESCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ESCI56872.2023.10100061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The deaf and hard-of-hearing community uses sign language to communicate and interact with the external world. Sign language recognition has been an active research area for many years, with progress in both sensor-based and vision-based methods. Sensor-based methods, such as those using gloves or other wearable devices, have historically been more accurate, but vision-based methods are becoming more prevalent because of their cost-effectiveness. This study aimed to recognize sign language words from hand images captured by a web camera. The MediaPipe Hands method was used to estimate hand landmarks, and features were generated from the distances between the landmarks. Support Vector Machine (SVM) classifiers were used for character and word classification. The study used its own dataset and compared different scaling factors, including the distances between landmark positions 0 and 17, 5 and 17, and 0 and 12, to determine which worked best. The best results were obtained with the palm-size distance (landmarks 0–9). The proposed approach is economically feasible and computationally simple, requiring no specialized equipment.
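The pipeline described in the abstract (MediaPipe Hands landmark estimation, inter-landmark distance features scaled by a palm-size reference distance, and an SVM classifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released code: the choice of all pairwise distances as features, the function names, and the commented training loop are hypothetical; only the palm-size scaling (wrist landmark 0 to middle-finger MCP landmark 9) and the SVM classifier follow the abstract.

```python
# Sketch: MediaPipe Hands landmarks -> palm-size-normalized pairwise distances -> SVM.
# The exact feature set and names below are illustrative assumptions.
from itertools import combinations

import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC

mp_hands = mp.solutions.hands


def landmark_distance_features(image_bgr):
    """Return palm-size-normalized pairwise landmark distances, or None if no hand is found."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    pts = np.array([(p.x, p.y) for p in lm])             # 21 landmarks in image-normalized coords
    palm_size = np.linalg.norm(pts[0] - pts[9]) + 1e-8    # wrist (0) to middle-finger MCP (9)
    return np.array([np.linalg.norm(pts[i] - pts[j]) / palm_size
                     for i, j in combinations(range(len(pts)), 2)])


# Hypothetical training loop over a labelled image set (paths/labels are placeholders):
# X, y = [], []
# for path, label in labelled_images:
#     feats = landmark_distance_features(cv2.imread(path))
#     if feats is not None:
#         X.append(feats)
#         y.append(label)
# clf = SVC(kernel="rbf").fit(X, y)
```

Dividing every distance by the wrist-to-middle-MCP distance makes the features roughly invariant to how far the hand is from the camera, which is presumably why the palm-size scaling factor performed best among the candidates compared.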