Lip Reading using Simple Dynamic Features and a Novel ROI for Feature Extraction
Abhilash Jain, G. Rathna
International Conference on Signal Processing and Machine Learning, 2018-11-28
DOI: 10.1145/3297067.3297083
Citations: 1
Abstract
Deaf or hard-of-hearing people rely largely on lip-reading to understand speech, demonstrating that humans can understand speech from visual cues alone. Automatic lip-reading systems work in a similar fashion, obtaining speech or text from visual information alone, such as a video of a person's face. In this paper, an automatic lip-reading system for spoken-digit recognition is presented. The system uses simple dynamic features, created by taking difference images between consecutive frames of the input video. Using this technique, word recognition rates of 83.79% and 65.58% are achieved in speaker-dependent and speaker-independent testing scenarios, respectively. A novel, extended region of interest (ROI) that includes the lower jaw and neck is also introduced; most lip-reading algorithms extract features from the mouth/lip region only. Compared with a simple mouth ROI, the proposed ROI improves performance by 4% in speaker-dependent tests and by 11% in speaker-independent tests.
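The two ideas in the abstract — frame-difference images as dynamic features and an extended ROI — can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the function names, the fixed downward-crop heuristic, and the `jaw_neck_scale` parameter are all illustrative.

```python
import numpy as np

def difference_images(frames):
    """Simple dynamic features: absolute difference images between
    consecutive grayscale frames, which capture inter-frame motion.
    `frames` is a sequence of equally sized 2-D arrays."""
    stack = np.asarray(frames, dtype=np.int16)  # widen to avoid uint8 wrap-around
    return np.abs(np.diff(stack, axis=0)).astype(np.uint8)

def crop_extended_roi(frame, mouth_box, jaw_neck_scale=2.0):
    """Hypothetical extended-ROI crop: stretch the detected mouth box
    downward by `jaw_neck_scale` * its height so the crop also covers
    the lower jaw and neck region. `mouth_box` is (x, y, w, h)."""
    x, y, w, h = mouth_box
    h_ext = int(h * jaw_neck_scale)             # extra rows below the mouth
    y_end = min(frame.shape[0], y + h + h_ext)  # clip to the frame boundary
    return frame[y:y_end, x:x + w]
```

The intuition for the difference images is that static appearance (skin, lighting) largely cancels out, leaving the moving lip, jaw, and neck regions — which is also why the extended ROI can add discriminative motion information.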