Research and implementation of a real time approach to lip detection in video sequences
Jianming Zhang, Liang-min Wang, Dejiao Niu, Y. Zhan
DOI: 10.1109/ICMLC.2003.1260027 · 计算机工程 · 2003-11-02 · pp. 2795-2799 Vol. 5
Citations: 12
Abstract
Locating the lip in video sequences is one of the primary steps in an automatic lipreading system. In this paper, a new approach to lip detection based on Red Exclusion and the Fisher transform is presented. In this approach, we first locate the face region using a skin-color model and motion correlation, then trisect the face image and keep only the lowest part, where the lip lies, for further processing. Second, we exclude the R component of the RGB color space and use the G and B components as the Fisher transform vector to enhance the lip image. Finally, in the enhanced image, we adaptively set a threshold that separates lip color from skin color based on the normal distribution of the gray-value histogram. Experimental results show that this fast approach detects the whole lip efficiently and is robust to illumination changes and different speakers.
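The abstract only outlines the pipeline (Red Exclusion, Fisher projection of the G and B channels, histogram-based thresholding), so the following is a minimal Python sketch of how those steps could fit together. It assumes hand-labelled lip/skin training pixels for the Fisher direction, an RGB-ordered lower-face crop, and a hypothetical threshold margin `k`; none of these details, names, or parameter values come from the paper itself.

```python
import numpy as np


def fisher_direction(lip_gb, skin_gb):
    """Compute a 1-D Fisher discriminant direction from (G, B) pixel samples.

    lip_gb, skin_gb: float arrays of shape (N, 2) holding the G and B values
    of labelled lip and skin pixels (the labelled training set is an assumption).
    """
    mu_lip = lip_gb.mean(axis=0)
    mu_skin = skin_gb.mean(axis=0)
    # Within-class scatter matrix S_w = S_lip + S_skin
    s_w = (np.cov(lip_gb, rowvar=False) * (len(lip_gb) - 1)
           + np.cov(skin_gb, rowvar=False) * (len(skin_gb) - 1))
    # Fisher direction: w proportional to S_w^{-1} (mu_lip - mu_skin)
    w = np.linalg.solve(s_w, mu_lip - mu_skin)
    return w / np.linalg.norm(w)


def enhance_and_segment(lower_face_rgb, w, k=1.5):
    """Red Exclusion + Fisher projection + adaptive threshold (illustrative only).

    lower_face_rgb: uint8 array (H, W, 3), R-G-B order, covering the lowest
    third of the detected face.  The margin k is a hypothetical parameter,
    not the paper's actual thresholding rule.
    """
    # Red Exclusion: discard the R channel, keep only G and B
    gb = lower_face_rgb[:, :, 1:3].astype(np.float64)

    # Project each pixel's (G, B) pair onto the Fisher direction
    enhanced = gb @ w

    # Rescale the projection to an 8-bit gray image
    enhanced -= enhanced.min()
    enhanced = (255.0 * enhanced / max(enhanced.max(), 1e-6)).astype(np.uint8)

    # Adaptive threshold: treat skin as the dominant, roughly normal mode of
    # the gray-value histogram and flag pixels far above its mean as lip
    mu, sigma = enhanced.mean(), enhanced.std()
    lip_mask = enhanced > (mu + k * sigma)
    return enhanced, lip_mask
```

A caller would first crop the lowest third of the face found by the skin-color and motion step, estimate `w` once from a small set of labelled pixels, and then apply `enhance_and_segment` to each frame; the exact face-localization and threshold-selection rules used in the paper are not reproduced here.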