{"title":"TLRNet: Tibetan Lip Reading Based on ResNet and BiGRU","authors":"Zhenye Gan, Xu Ding, Xinke Yu, Zhenxing Kong","doi":"10.1109/EPCE58798.2023.00048","DOIUrl":null,"url":null,"abstract":"Lip reading, also known as visual speech recognition, is a way of human-computer interaction based on visual information. At present, the research on lip reading mainly focuses on English and Mandarin Chinese, and there are relatively few studies on Tibetan, a low-resource minority language. Therefore, the present study proposes a specific deep learning model named the TLRNet for word-level visual speech recognition for Tibetan. The model comprises the ResNet-18 architecture, which is a residual neural network, and the BiGRU layer, a bi-directional gated recurrent unit. We train and evaluate it on the TLRW-50 dataset, which consists of fifty common Tibetan words. Our proposed model achieves Top-1 and Top-5 classification accuracies of 41.82% and 59.37%, respectively, demonstrating its potential effectiveness in recognizing Tibetan spoken words based on visual cues.","PeriodicalId":355442,"journal":{"name":"2023 2nd Asia Conference on Electrical, Power and Computer Engineering (EPCE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd Asia Conference on Electrical, Power and Computer Engineering (EPCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EPCE58798.2023.00048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Lip reading, also known as visual speech recognition, is a form of human-computer interaction that relies on visual information alone. Research on lip reading has so far focused mainly on English and Mandarin Chinese, while Tibetan, a low-resource minority language, has received comparatively little attention. This study therefore proposes TLRNet, a deep learning model for word-level visual speech recognition in Tibetan. The model combines the ResNet-18 architecture, a residual convolutional network, with a BiGRU layer, a bidirectional gated recurrent unit. We train and evaluate it on the TLRW-50 dataset, which consists of fifty common Tibetan words. The proposed model achieves Top-1 and Top-5 classification accuracies of 41.82% and 59.37%, respectively, demonstrating its potential for recognizing spoken Tibetan words from visual cues alone.
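To make the stated architecture concrete, the following is a minimal PyTorch sketch of how a ResNet-18 frontend and a BiGRU temporal model could be wired together for 50-way word classification. Everything beyond the components named in the abstract is an assumption: the class name TLRNetSketch, the hidden size, the number of GRU layers, the per-frame 2D application of ResNet-18, the input resolution, the frame count, and the mean pooling over time are illustrative choices, not the authors' actual configuration.

```python
# Sketch of a ResNet-18 + BiGRU word-level lip-reading classifier.
# Hyperparameters and input shapes are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class TLRNetSketch(nn.Module):
    """Per-frame ResNet-18 features -> BiGRU over time -> 50-way classifier."""

    def __init__(self, num_classes: int = 50, hidden_size: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled feature per frame
        self.frontend = backbone
        self.bigru = nn.GRU(
            input_size=512,
            hidden_size=hidden_size,
            num_layers=2,                    # assumed depth
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width) cropped mouth-region frames
        b, t, c, h, w = x.shape
        feats = self.frontend(x.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.bigru(feats)               # (b, t, 2 * hidden_size)
        return self.classifier(out.mean(dim=1))  # average over time, then classify


if __name__ == "__main__":
    frames = torch.randn(2, 29, 3, 112, 112)     # 29 frames and 112x112 crops are assumptions
    logits = TLRNetSketch()(frames)
    print(logits.shape)                          # torch.Size([2, 50])
```

A common design alternative in published lip-reading pipelines is a 3D convolutional stem before the 2D ResNet and a different temporal aggregation than simple averaging; which variant TLRNet uses is not specified in the abstract.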