Facial electromyogram-based emotion recognition for virtual reality applications using machine learning classifiers trained on posed expressions.

IF 3.2 · CAS Tier 4 (Medicine) · JCR Q2 · ENGINEERING, BIOMEDICAL
Biomedical Engineering Letters · Pub Date: 2025-05-03 · eCollection Date: 2025-07-01 · DOI: 10.1007/s13534-025-00477-5
Jung-Hwan Kim, Ho-Seung Cha, Chang-Hwan Im
Volume 15(4), pages 773-783. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229427/pdf/
Citations: 0

Abstract


Recognition of human emotions holds great potential for various daily-life applications. With the increasing interest in virtual reality (VR) technologies, numerous studies have proposed new approaches to integrating emotion recognition into VR environments. However, despite recent advancements, camera-based emotion-recognition technology faces critical limitations due to the physical obstruction caused by head-mounted displays (HMDs). Facial electromyography (fEMG) offers a promising alternative for human emotion-recognition in VR environments, as electrodes can be readily embedded in the padding of commercial HMD devices. However, conventional fEMG-based emotion recognition approaches, although not yet developed for VR applications, require lengthy and tedious calibration sessions. These sessions typically involve collecting fEMG data during the presentation of audio-visual stimuli for eliciting specific emotions. We trained a machine learning classifier using fEMG data acquired while users intentionally made posed facial expressions. This approach simplifies the traditionally time-consuming calibration process, making it less burdensome for users. The proposed method was validated using 20 participants who made posed facial expressions for calibration and then watched emotion-evoking video clips for validation. The results demonstrated the effectiveness of our method in classifying high- and low-valence states, achieving a macro F1-score of 88.20%. This underscores the practicality and efficiency of the proposed method. To the best of our knowledge, this is the first study to successfully build an fEMG-based emotion-recognition model using posed facial expressions. This approach paves the way for developing user-friendly interface technologies in VR-immersive environments.
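The abstract reports a macro F1-score of 88.20% for classifying high- versus low-valence states, using a classifier calibrated on posed facial expressions and validated on video-evoked emotions. The paper itself includes no code; the sketch below only illustrates that evaluation scheme with synthetic stand-in features and a simple nearest-centroid classifier. The feature representation, channel count, and classifier here are assumptions for illustration, not the authors' actual method; the `macro_f1` function shows how the reported metric averages per-class F1 scores with equal weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for fEMG features (e.g. per-channel RMS amplitudes).
# Calibration set: posed expressions; validation set: video-evoked emotions.
def make_features(n, offset):
    low = rng.normal(0.0 + offset, 0.3, size=(n, 4))   # low-valence trials
    high = rng.normal(1.0 + offset, 0.3, size=(n, 4))  # high-valence trials
    X = np.vstack([low, high])
    y = np.array([0] * n + [1] * n)
    return X, y

X_cal, y_cal = make_features(40, offset=0.0)   # posed-expression calibration
X_val, y_val = make_features(40, offset=0.1)   # evoked-emotion validation

# Nearest-centroid classifier fitted only on the posed calibration data.
centroids = np.stack([X_cal[y_cal == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_val[:, None, :] - centroids[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)

# Macro F1: compute F1 per class, then average with equal class weight.
def macro_f1(y_true, y_pred, classes=(0, 1)):
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

print(f"macro F1 on evoked-emotion validation: {macro_f1(y_val, y_pred):.2%}")
```

Macro averaging matters here because it weights high- and low-valence classes equally regardless of how many trials each contributes, so a classifier cannot reach a high score by favoring the majority class.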

Source journal: Biomedical Engineering Letters (ENGINEERING, BIOMEDICAL)
CiteScore: 6.80 · Self-citation rate: 0.00% · Annual publications: 34
About the journal: Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.