Partially occluded facial action recognition and interaction in virtual reality applications
U. Ciftci, Xing Zhang, Lijun Yin
2017 IEEE International Conference on Multimedia and Expo (ICME)
Published: 2017-07-10
DOI: 10.1109/ICME.2017.8019545
Citations: 11
Abstract
The proliferation of affordable virtual reality (VR) head-mounted displays (HMDs) provides users with realistic, immersive visual experiences. However, HMDs occlude the upper half of a user's face and prevent facial action recognition from the entire face. As a result, the full face cannot serve as a source of feedback for more interactive VR applications. To tackle this problem, we propose a new depth-based recognition framework that recognizes mouth gestures in real time and uses those recognized gestures as a medium of interaction within virtual reality. Our system uses a new 3D edge map approach to describe mouth features, and further classifies those features into seven gesture classes. The accuracy of the proposed mouth gesture framework was evaluated in user-independent tests and achieved high recognition rates. The system has also been demonstrated and validated through a real-time VR application.
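The abstract does not describe how the 3D edge map is constructed or how the seven gesture classes are distinguished. As a rough illustration only, the sketch below approximates a depth-based edge map as the gradient magnitude of a mouth-region depth image and assigns a gesture by nearest-prototype matching; the function names, the finite-difference edge map, and the prototype-matching classifier are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def depth_edge_map(depth):
    """Approximate a 3D edge map as the gradient magnitude of a depth
    image (simple finite differences). Assumption: the paper's actual
    edge-map construction is not specified in the abstract."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy)

def classify_gesture(edge_map, prototypes):
    """Assign the gesture whose prototype edge map is nearest in
    Euclidean distance over the flattened maps. `prototypes` maps each
    of the seven hypothetical gesture labels to a reference edge map."""
    feats = edge_map.ravel()
    return min(prototypes,
               key=lambda g: np.linalg.norm(feats - prototypes[g].ravel()))
```

In a real pipeline the prototypes would be replaced by a trained classifier over many users' samples, which is what the user-independent evaluation mentioned in the abstract implies.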