Nghe-Nhan Truong, Truong-Dong Do, Thien Nguyen, Minh-Thien Duong, Thanh-Hai Nguyen, M. Le
{"title":"A Vision-based Hand-sign Language Teaching System using Deep Neural Network: Methodology and Experiments","authors":"Nghe-Nhan Truong, Truong-Dong Do, Thien Nguyen, Minh-Thien Duong, Thanh-Hai Nguyen, M. Le","doi":"10.1109/IWIS56333.2022.9920883","DOIUrl":null,"url":null,"abstract":"In this paper, a real-time hand-sign language teaching system using deep neural network is proposed. Communication presents a significant barrier for persons who are impaired in hearing and speaking. There are various projects and studies have been conducted to create or improve smart systems for this rapid-growth population. Deep learning approaches became widely used to enhance the accuracy of sign language recognition models. However, most research has primarily concentrated on hand gestures for translation, not language self-learning for long-term development. This work aims to construct a complete system to assist deaf and mute people in studying and examining their performance. First, we designed and built a prosthetic arm equipped with a monocular camera using 3D printing. Second, the MediaPipe library was used to extract key points from collected videos of the hand gestures. Then, the Gated Recurrent Units model is trained to recognize words based on the data. 
The real-time experimental results demonstrate the system's effectiveness and potential with 97 percent accuracy.","PeriodicalId":340399,"journal":{"name":"2022 International Workshop on Intelligent Systems (IWIS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Workshop on Intelligent Systems (IWIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWIS56333.2022.9920883","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this paper, a real-time hand-sign language teaching system using a deep neural network is proposed. Communication presents a significant barrier for people with hearing and speech impairments. Various projects and studies have been conducted to create or improve smart systems for this rapidly growing population, and deep learning approaches are now widely used to improve the accuracy of sign language recognition models. However, most research has concentrated on hand-gesture translation rather than language self-learning for long-term development. This work aims to construct a complete system that assists deaf and mute people in studying sign language and examining their own performance. First, we designed and built a 3D-printed prosthetic arm equipped with a monocular camera. Second, the MediaPipe library was used to extract key points from collected videos of the hand gestures. Then, a Gated Recurrent Unit (GRU) model was trained on these key points to recognize words. Real-time experiments demonstrate the system's effectiveness and potential, achieving 97% accuracy.
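The pipeline the abstract describes — per-frame hand key points fed through a GRU to classify a gesture sequence — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the hidden size, sequence length, and random inputs are placeholder assumptions; only the 63-feature frame layout (21 MediaPipe hand landmarks × 3 coordinates) follows the MediaPipe Hands model the paper uses.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (Cho et al., 2014) processing one frame at a time."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Stacked weights for the update (z), reset (r), and candidate gates.
        self.W = rng.uniform(-s, s, (3, hidden_size, input_size))
        self.U = rng.uniform(-s, s, (3, hidden_size, hidden_size))
        self.b = np.zeros((3, hidden_size))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])        # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])        # reset gate
        h_tilde = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        return (1.0 - z) * h + z * h_tilde                            # blended state

def encode_sequence(cell, frames):
    """Run the GRU over a (T, input_size) clip; return the final hidden state,
    which a classification head would map to a word label."""
    h = np.zeros(cell.b.shape[1])
    for x in frames:
        h = cell.step(x, h)
    return h

# 21 hand landmarks x 3 coordinates = 63 features per frame (MediaPipe Hands).
cell = GRUCell(input_size=63, hidden_size=32)
frames = np.random.default_rng(1).standard_normal((30, 63))  # stand-in 30-frame clip
h_final = encode_sequence(cell, frames)
print(h_final.shape)  # (32,)
```

In practice the final hidden state would feed a softmax layer over the word vocabulary, and training would use a framework such as TensorFlow or PyTorch rather than hand-rolled NumPy; the sketch only shows why a recurrent cell suits variable-length gesture clips.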