Facial electromyogram-based facial gesture recognition for hands-free control of an AR/VR environment: optimal gesture set selection and validation of feasibility as an assistive technology.
Chunghwan Kim, Chaeyoon Kim, HyunSub Kim, HwyKuen Kwak, WooJin Lee, Chang-Hwan Im
{"title":"Facial electromyogram-based facial gesture recognition for hands-free control of an AR/VR environment: optimal gesture set selection and validation of feasibility as an assistive technology.","authors":"Chunghwan Kim, Chaeyoon Kim, HyunSub Kim, HwyKuen Kwak, WooJin Lee, Chang-Hwan Im","doi":"10.1007/s13534-023-00277-9","DOIUrl":null,"url":null,"abstract":"<p><p>The rapid expansion of virtual reality (VR) and augmented reality (AR) into various applications has increased the demand for hands-free input interfaces when traditional control methods are inapplicable (e.g., for paralyzed individuals who cannot move their hands). Facial electromyogram (fEMG), bioelectric signals generated from facial muscles, could solve this problem. Discriminating facial gestures using fEMG is possible because fEMG signals vary with these gestures. Thus, these signals can be used to generate discrete hands-free control commands. This study implemented an fEMG-based facial gesture recognition system for generating discrete commands to control an AR or VR environment. The fEMG signals around the eyes were recorded, assuming that the fEMG electrodes were embedded into the VR head-mounted display (HMD). Sixteen discrete facial gestures were classified using linear discriminant analysis (LDA) with Riemannian geometry features. Because the fEMG electrodes were far from the facial muscles associated with the facial gestures, some similar facial gestures were indistinguishable from each other. Therefore, this study determined the best facial gesture combinations with the highest classification accuracy for 3-15 commands. An analysis of the fEMG data acquired from 15 participants showed that the optimal facial gesture combinations increased the accuracy by 4.7%p compared with randomly selected facial gesture combinations. Moreover, this study is the first to investigate the feasibility of implementing a subject-independent facial gesture recognition system that does not require individual user training sessions. Lastly, our online hands-free control system was successfully applied to a media player to demonstrate the applicability of the proposed system.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s13534-023-00277-9.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"13 3","pages":"465-473"},"PeriodicalIF":3.2000,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382369/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Engineering Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s13534-023-00277-9","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0
Abstract
The rapid expansion of virtual reality (VR) and augmented reality (AR) into various applications has increased the demand for hands-free input interfaces when traditional control methods are inapplicable (e.g., for paralyzed individuals who cannot move their hands). Facial electromyogram (fEMG), bioelectric signals generated from facial muscles, could solve this problem. Discriminating facial gestures using fEMG is possible because fEMG signals vary with these gestures. Thus, these signals can be used to generate discrete hands-free control commands. This study implemented an fEMG-based facial gesture recognition system for generating discrete commands to control an AR or VR environment. The fEMG signals around the eyes were recorded, assuming that the fEMG electrodes were embedded into the VR head-mounted display (HMD). Sixteen discrete facial gestures were classified using linear discriminant analysis (LDA) with Riemannian geometry features. Because the fEMG electrodes were far from the facial muscles associated with the facial gestures, some similar facial gestures were indistinguishable from each other. Therefore, this study determined the best facial gesture combinations with the highest classification accuracy for 3-15 commands. An analysis of the fEMG data acquired from 15 participants showed that the optimal facial gesture combinations increased the accuracy by 4.7%p compared with randomly selected facial gesture combinations. Moreover, this study is the first to investigate the feasibility of implementing a subject-independent facial gesture recognition system that does not require individual user training sessions. Lastly, our online hands-free control system was successfully applied to a media player to demonstrate the applicability of the proposed system.
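The classification approach described in the abstract (spatial covariance of fEMG epochs, Riemannian tangent-space features, LDA) and the search for an optimal gesture subset can be illustrated with the following minimal sketch. This is not the authors' implementation: the channel count, epoch length, trial counts, and the greedy forward search are assumptions for illustration only, and the pyriemann/scikit-learn calls are simply one common way to assemble such a pipeline.

```python
# Minimal sketch (assumptions, not the authors' code): Riemannian tangent-space
# features + LDA for fEMG gesture classification, plus a greedy search for a
# k-gesture command set scored by cross-validated accuracy.
from itertools import combinations

import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: X holds fEMG epochs (n_trials, n_channels, n_samples),
# y holds gesture labels; 16 gestures x 20 trials, 8 channels are assumed.
X = np.random.randn(320, 8, 500)
y = np.repeat(np.arange(16), 20)

# Pipeline: per-epoch spatial covariance -> tangent-space projection -> LDA.
clf = make_pipeline(
    Covariances(estimator="oas"),      # regularized covariance per epoch
    TangentSpace(metric="riemann"),    # map SPD matrices to tangent space
    LinearDiscriminantAnalysis(),
)

def subset_accuracy(gestures):
    """Cross-validated accuracy using only trials of the selected gestures."""
    mask = np.isin(y, gestures)
    return cross_val_score(clf, X[mask], y[mask], cv=5).mean()

def greedy_gesture_set(k, all_gestures=range(16)):
    """Greedy forward selection of a k-gesture command set (illustrative)."""
    # Seed with the best pair (LDA needs at least two classes), then grow.
    chosen = list(max(combinations(all_gestures, 2), key=subset_accuracy))
    while len(chosen) < k:
        best = max((g for g in all_gestures if g not in chosen),
                   key=lambda g: subset_accuracy(chosen + [g]))
        chosen.append(best)
    return chosen, subset_accuracy(chosen)

gestures, acc = greedy_gesture_set(k=5)
print(f"Selected gestures: {gestures}, CV accuracy: {acc:.3f}")
```

In the study itself, the optimal combinations for 3-15 commands were determined from fEMG recorded from 15 participants; the greedy search above merely shows how candidate gesture subsets could be ranked by cross-validated accuracy (an exhaustive search over combinations would be the exact counterpart).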
Supplementary information: The online version contains supplementary material available at 10.1007/s13534-023-00277-9.
Journal description:
Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.