Computer vision based approach for Indian Sign Language character recognition
R. K. Shangeetha, V. Valliammai, S. Padmavathi
2012 International Conference on Machine Vision and Image Processing (MVIP), December 2012
DOI: 10.1109/MVIP.2012.6428790
Citations: 30
Abstract
Deaf and mute people communicate among themselves using sign languages, but they find it difficult to communicate with the outside world. This paper proposes a method to convert Indian Sign Language (ISL) hand gestures into appropriate text messages. The hand gestures corresponding to the ISL English alphabet are captured through a webcam. In the captured frames the hand is segmented, and the state of the fingers is used to recognize the letter. Features such as the angles between fingers, the number of fingers that are fully open, fully closed, or semi-closed, and the identification of each finger are used for recognition. Experiments were carried out on single-hand alphabet gestures, and the results are summarised.
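The core recognition idea described in the abstract, classifying each finger as fully open, semi-closed, or fully closed and mapping the resulting state vector to a letter, can be sketched as follows. The distance-ratio thresholds, the state encoding, and the lookup table below are illustrative assumptions for exposition, not the paper's actual features or values.

```python
# Hypothetical sketch of finger-state classification for sign recognition.
# A finger's state is judged from the ratio of its fingertip-to-palm-centre
# distance to the palm width; all thresholds are illustrative assumptions.

OPEN, SEMI, CLOSED = "O", "S", "C"

def finger_state(tip_dist: float, palm_width: float) -> str:
    """Classify one finger as open, semi-closed, or closed."""
    ratio = tip_dist / palm_width
    if ratio > 1.5:       # fingertip far from the palm centre
        return OPEN
    if ratio > 0.9:       # partially extended
        return SEMI
    return CLOSED         # folded onto the palm

# Toy lookup from a five-finger state vector (thumb..little finger) to a
# letter; a real mapping for the ISL alphabet would be derived from the
# actual gesture definitions, not these made-up entries.
GESTURE_TABLE = {
    ("O", "O", "O", "O", "O"): "B",   # flat open hand (illustrative)
    ("C", "O", "C", "C", "C"): "D",   # only index extended (illustrative)
    ("O", "C", "C", "C", "C"): "A",   # thumb out, fist (illustrative)
}

def recognise(tip_dists, palm_width):
    """Map per-finger tip distances to a letter, or None if unknown."""
    states = tuple(finger_state(d, palm_width) for d in tip_dists)
    return GESTURE_TABLE.get(states)

print(recognise([60, 200, 70, 65, 55], 120))  # index extended -> D
```

In a full pipeline these tip distances would come from the segmented hand contour in each webcam frame; the sketch only covers the decision step after those measurements are available.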