Indian Sign Language Recognition Using MobileNet
Anuj Chavan, Jayesh Bane, Vishrut Chokshi, D. Ambawade
DOI: 10.1109/IATMSI56455.2022.10119345 (https://doi.org/10.1109/IATMSI56455.2022.10119345)
Published in: 2022 IEEE Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI)
Publication date: 2022-12-21
Citations: 0
Abstract
Sign language is one of the most widely used methods of communication among the specially-abled, primarily the hearing and speech impaired. Millions of people in India and around the world use this gesture-based language daily, yet they often find it difficult to express themselves to, and to understand, those who do not know it. This paper presents a lightweight application that uses a Convolutional Neural Network (CNN) based on the popular MobileNet classification model to recognize Indian Sign Language gestures captured with OpenCV. The model is built with the TensorFlow library, using the Keras API as the neural-network model-building framework, and is deployed in a Python application that provides a user-friendly GUI for people who wish to learn the basic alphanumeric characters of Indian Sign Language (ISL). The application takes in live video frames in which the user shows a gesture and displays the corresponding character in the same window. The model could be used in schools that teach basic ISL characters to the specially-abled, or by anyone who would like to learn ISL.
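The abstract's choice of MobileNet for a lightweight application rests on its use of depthwise-separable convolutions in place of standard ones. The paper's own code is not shown here; the following sketch, with illustrative layer sizes chosen for this example, only demonstrates the parameter savings that make the architecture small enough for such an app.

```python
# MobileNet factors a standard k x k convolution into a depthwise k x k
# filter (one per input channel) followed by a 1x1 pointwise convolution.
# This sketch compares parameter counts for a single layer; the kernel
# size and channel counts below are illustrative assumptions, not values
# taken from the paper.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k filters plus a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 64                     # a typical early CNN layer
std = standard_conv_params(k, c_in, c_out)     # 3*3*32*64  = 18432
sep = separable_conv_params(k, c_in, c_out)    # 3*3*32 + 32*64 = 2336
print(std, sep, round(std / sep, 1))           # roughly 7.9x fewer parameters
```

Stacked over many layers, this factorization is what keeps the model small and fast enough to classify webcam frames interactively in a desktop GUI.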