{"title":"Two Dimensional (2D) Convolutional Neural Network for Nepali Sign Language Recognition","authors":"Drish Mali, Rubash Mali, Sushila Sipai, S. Panday","doi":"10.1109/SKIMA.2018.8631515","DOIUrl":null,"url":null,"abstract":"Sign language is a basic mode of communication between people who have difficulties in speech and hearing. If computers can detect and distinguish these signs, communication would be easier and dependency on a translator reduces. This paper provides the structure of the system which translates Nepali signs from Nepali Sign Language (NSL) into their respective meaningful words. It captures the static hand gestures and translates the pictures into their corresponding meanings using 2D Convolutional Neural Network. Red glove is used for segmentation purpose. Data set is obtained by manually capturing images using a front camera of a laptop. The system got higher accuracy for the model that recognizes 5 signs than the 7 and the 9 signs model. It also facilitates the users to search for the signs using their corresponding English words. Its core objective is to make easy communication between differently-abled people and who do not understand sign language without involvement of a translator.","PeriodicalId":199576,"journal":{"name":"2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SKIMA.2018.8631515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Sign language is a basic mode of communication for people with speech and hearing difficulties. If computers can detect and distinguish these signs, communication becomes easier and reliance on a translator is reduced. This paper presents a system that translates static hand gestures from Nepali Sign Language (NSL) into their corresponding meaningful words. It captures images of the gestures and classifies them using a 2D Convolutional Neural Network. A red glove is used for hand segmentation, and the data set was obtained by manually capturing images with a laptop's front camera. The system achieved higher accuracy with the model that recognizes 5 signs than with the 7-sign and 9-sign models. It also lets users search for signs by their corresponding English words. Its core objective is to ease communication between differently-abled people and those who do not understand sign language, without the involvement of a translator.
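The record gives no implementation details beyond "red glove segmentation" followed by 2D convolution, so the following is only a minimal NumPy sketch of those two steps, not the authors' pipeline; the color thresholds and kernel are illustrative assumptions:

```python
import numpy as np

def segment_red_glove(img, r_min=150, gb_max=80):
    """Binary mask of strongly red pixels (a stand-in for the paper's
    red-glove segmentation; the exact thresholds are an assumption)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return ((r >= r_min) & (g <= gb_max) & (b <= gb_max)).astype(np.float32)

def conv2d(x, kernel):
    """Naive 'valid' 2D convolution (cross-correlation), the core
    operation in the forward pass of a 2D convolutional layer."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with each sliding window.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a 4x4 RGB frame with a 2x2 red patch (the "glove").
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3, 0] = 200          # strong red channel only
mask = segment_red_glove(frame)   # 1.0 inside the patch, 0.0 elsewhere
feature_map = conv2d(mask, np.ones((2, 2), dtype=np.float32))
```

In a full CNN this convolution would be repeated with many learned kernels, followed by nonlinearities, pooling, and a dense softmax layer over the sign classes.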