{"title":"基于面部肌电图的虚拟现实应用情感识别,使用机器学习分类器训练姿势表情。","authors":"Jung-Hwan Kim, Ho-Seung Cha, Chang-Hwan Im","doi":"10.1007/s13534-025-00477-5","DOIUrl":null,"url":null,"abstract":"<p><p>Recognition of human emotions holds great potential for various daily-life applications. With the increasing interest in virtual reality (VR) technologies, numerous studies have proposed new approaches to integrating emotion recognition into VR environments. However, despite recent advancements, camera-based emotion-recognition technology faces critical limitations due to the physical obstruction caused by head-mounted displays (HMDs). Facial electromyography (fEMG) offers a promising alternative for human emotion-recognition in VR environments, as electrodes can be readily embedded in the padding of commercial HMD devices. However, conventional fEMG-based emotion recognition approaches, although not yet developed for VR applications, require lengthy and tedious calibration sessions. These sessions typically involve collecting fEMG data during the presentation of audio-visual stimuli for eliciting specific emotions. We trained a machine learning classifier using fEMG data acquired while users intentionally made posed facial expressions. This approach simplifies the traditionally time-consuming calibration process, making it less burdensome for users. The proposed method was validated using 20 participants who made posed facial expressions for calibration and then watched emotion-evoking video clips for validation. The results demonstrated the effectiveness of our method in classifying high- and low-valence states, achieving a macro F1-score of 88.20%. This underscores the practicality and efficiency of the proposed method. To the best of our knowledge, this is the first study to successfully build an fEMG-based emotion-recognition model using posed facial expressions. This approach paves the way for developing user-friendly interface technologies in VR-immersive environments.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"15 4","pages":"773-783"},"PeriodicalIF":3.2000,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229427/pdf/","citationCount":"0","resultStr":"{\"title\":\"Facial electromyogram-based emotion recognition for virtual reality applications using machine learning classifiers trained on posed expressions.\",\"authors\":\"Jung-Hwan Kim, Ho-Seung Cha, Chang-Hwan Im\",\"doi\":\"10.1007/s13534-025-00477-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recognition of human emotions holds great potential for various daily-life applications. With the increasing interest in virtual reality (VR) technologies, numerous studies have proposed new approaches to integrating emotion recognition into VR environments. However, despite recent advancements, camera-based emotion-recognition technology faces critical limitations due to the physical obstruction caused by head-mounted displays (HMDs). Facial electromyography (fEMG) offers a promising alternative for human emotion-recognition in VR environments, as electrodes can be readily embedded in the padding of commercial HMD devices. However, conventional fEMG-based emotion recognition approaches, although not yet developed for VR applications, require lengthy and tedious calibration sessions. 
These sessions typically involve collecting fEMG data during the presentation of audio-visual stimuli for eliciting specific emotions. We trained a machine learning classifier using fEMG data acquired while users intentionally made posed facial expressions. This approach simplifies the traditionally time-consuming calibration process, making it less burdensome for users. The proposed method was validated using 20 participants who made posed facial expressions for calibration and then watched emotion-evoking video clips for validation. The results demonstrated the effectiveness of our method in classifying high- and low-valence states, achieving a macro F1-score of 88.20%. This underscores the practicality and efficiency of the proposed method. To the best of our knowledge, this is the first study to successfully build an fEMG-based emotion-recognition model using posed facial expressions. This approach paves the way for developing user-friendly interface technologies in VR-immersive environments.</p>\",\"PeriodicalId\":46898,\"journal\":{\"name\":\"Biomedical Engineering Letters\",\"volume\":\"15 4\",\"pages\":\"773-783\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229427/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Engineering Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s13534-025-00477-5\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/7/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Engineering Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s13534-025-00477-5","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/7/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Facial electromyogram-based emotion recognition for virtual reality applications using machine learning classifiers trained on posed expressions.
Abstract:

Recognition of human emotions holds great potential for various daily-life applications. With the increasing interest in virtual reality (VR) technologies, numerous studies have proposed new approaches to integrating emotion recognition into VR environments. However, despite recent advancements, camera-based emotion-recognition technology faces critical limitations due to the physical obstruction caused by head-mounted displays (HMDs). Facial electromyography (fEMG) offers a promising alternative for human emotion recognition in VR environments, as electrodes can be readily embedded in the padding of commercial HMD devices. However, conventional fEMG-based emotion recognition approaches, although not yet developed for VR applications, require lengthy and tedious calibration sessions. These sessions typically involve collecting fEMG data during the presentation of audio-visual stimuli designed to elicit specific emotions. In this study, we trained a machine learning classifier using fEMG data acquired while users intentionally made posed facial expressions. This approach simplifies the traditionally time-consuming calibration process, making it less burdensome for users. The proposed method was validated with 20 participants, who made posed facial expressions for calibration and then watched emotion-evoking video clips for validation. The results demonstrated the effectiveness of our method in classifying high- and low-valence states, achieving a macro F1-score of 88.20%. This underscores the practicality and efficiency of the proposed method. To the best of our knowledge, this is the first study to successfully build an fEMG-based emotion-recognition model using posed facial expressions. This approach paves the way for developing user-friendly interface technologies in VR-immersive environments.
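To make the calibrate-on-posed, test-on-evoked workflow concrete, the following is a minimal sketch of the general scheme the abstract describes. The feature choice (windowed per-channel RMS), the classifier (a linear SVM), the sampling rate, channel count, and the use of random arrays in place of real recordings are all assumptions for illustration; the abstract does not specify the authors' actual pipeline.

```python
# Hypothetical sketch: fit a valence classifier on posed-expression fEMG,
# then evaluate it on fEMG recorded while watching emotion-evoking clips.
# Features, classifier, and data shapes are assumptions, not the paper's method.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def rms_features(emg, fs=1000, win_s=0.5):
    """Split each multi-channel fEMG trial into windows and take per-channel RMS.
    emg: array of shape (n_trials, n_channels, n_samples)."""
    win = int(fs * win_s)
    n_trials, n_ch, n_samp = emg.shape
    n_win = n_samp // win
    windows = emg[:, :, : n_win * win].reshape(n_trials, n_ch, n_win, win)
    rms = np.sqrt((windows ** 2).mean(axis=-1))  # (n_trials, n_ch, n_win)
    return rms.reshape(n_trials, -1)             # flatten into feature vectors

rng = np.random.default_rng(0)

# Calibration: posed expressions (e.g., smile vs. frown) labeled as
# high (1) vs. low (0) valence -- random data stands in for real recordings.
X_posed = rng.standard_normal((40, 4, 2000))  # 40 trials, 4 channels, 2 s @ 1 kHz
y_posed = np.repeat([1, 0], 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(rms_features(X_posed), y_posed)

# Validation: fEMG recorded during emotion-evoking video clips,
# scored with the macro F1 metric reported in the abstract.
X_video = rng.standard_normal((20, 4, 2000))
y_video = rng.integers(0, 2, size=20)
y_pred = clf.predict(rms_features(X_video))
print("macro F1:", f1_score(y_video, y_pred, average="macro"))
```

The key property of this setup is that the training labels come from brief, user-controlled posed expressions rather than from a long stimulus-presentation session, which is what shortens calibration.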
Journal Introduction:
Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, or evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.