Speaker-Independent Visual Speech Recognition with the Inception V3 Model
Timothy Israel Santos, Andrew Abel, N. Wilson, Yan Xu
2021 IEEE Spoken Language Technology Workshop (SLT), published 2021-01-19
DOI: 10.1109/SLT48900.2021.9383540
Citations: 4
Abstract
The natural process of understanding speech combines auditory and visual cues. CNN-based lip reading systems have become very popular in recent years, but many of these systems treat lipreading as a black-box problem, with limited detailed performance analysis. In this paper, we perform transfer learning by training the Inception v3 CNN model, with weights pre-trained on ImageNet, on the GRID corpus, delivering good speech recognition results: 0.61 precision, 0.53 recall, and 0.51 F1-score. The lip reading model automatically learns pertinent features, as demonstrated using visualisation, and achieves speaker-independent results on the GRID corpus comparable to those of human lip readers. We also identify limitations that match those of human lip readers, which constrain the potential performance of deep learning in real-world situations.
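The paper does not include code, but the transfer-learning setup it describes (Inception v3 with ImageNet weights, retrained on GRID) follows a standard pattern. Below is a minimal Keras sketch of that pattern; the classification head, the class count (GRID's 51-word vocabulary), and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load Inception v3 with ImageNet pre-trained weights, dropping the
# original 1000-class ImageNet classification head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet",
    include_top=False,
    input_shape=(299, 299, 3),  # Inception v3's native input size
    pooling="avg",              # global average pooling over final feature maps
)
base.trainable = False  # freeze pre-trained features; only the new head trains

# Assumption: GRID's vocabulary is treated as a 51-way word classification task.
NUM_CLASSES = 51

model = models.Sequential([
    base,
    layers.Dense(1024, activation="relu"),  # hypothetical head, not from the paper
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the pre-trained base and training only the new head is the usual first stage of transfer learning; the ImageNet features can then optionally be fine-tuned at a lower learning rate on cropped mouth-region frames from GRID.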