Emotion recognition in Virtual Reality using sensor fusion with eye tracking

Impact Factor: 6.3 · CAS Tier 2 (Medicine) · JCR Q1 (Biology)
Meral Kuyucu, Mehmet Ali Sarikaya, Tülay Karakaş, Dilek Yıldız Özkan, Yüksel Demir, Ömer Bilen, Gökhan Ince
{"title":"Emotion recognition in Virtual Reality using sensor fusion with eye tracking","authors":"Meral Kuyucu ,&nbsp;Mehmet Ali Sarikaya ,&nbsp;Tülay Karakaş ,&nbsp;Dilek Yıldız Özkan ,&nbsp;Yüksel Demir ,&nbsp;Ömer Bilen ,&nbsp;Gökhan Ince","doi":"10.1016/j.compbiomed.2025.111070","DOIUrl":null,"url":null,"abstract":"<div><div>Emotion recognition is an emerging field with applications in healthcare, education, and entertainment. This study integrates Virtual Reality (VR) with multi-sensor fusion to enhance emotion recognition. The research comprises two phases: data collection and analysis/evaluation. Ninety-five participants were exposed to curated audiovisual stimuli designed to elicit a wide range of emotions within an immersive VR environment. VR was chosen for its ability to provide controlled conditions and overcome the limitations of current mobile sensor technologies. Physiological data streams from various sensors were integrated for comprehensive emotional analysis. ElectroEncephaloGraphy (EEG) data revealed brain activity linked to emotional states, while eye tracking data provided insights into gaze direction, pupil dilation, and eye movement—factors correlated with cognitive and emotional processes. Peripheral signals, including heart rate variability, ElectroDermal Activity (EDA), and body temperature, were captured via wearable sensors to enrich the dataset. Machine learning models, such as XGBoost, CatBoost, Multilayer Perceptron, Gradient Boosting, and LightGBM, were employed to predict participants’ emotional states. Evaluation metrics, including accuracy, precision, recall, and F1 scores, demonstrated the robustness and precision of the proposed VR-based multi-sensor fusion approach. This research presents a novel approach to emotion recognition, bridging gaps in traditional methods by integrating VR, multi-sensor fusion, and machine learning.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"197 ","pages":"Article 111070"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in biology and medicine","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010482525014222","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Emotion recognition is an emerging field with applications in healthcare, education, and entertainment. This study integrates Virtual Reality (VR) with multi-sensor fusion to enhance emotion recognition. The research comprises two phases: data collection and analysis/evaluation. Ninety-five participants were exposed to curated audiovisual stimuli designed to elicit a wide range of emotions within an immersive VR environment. VR was chosen for its ability to provide controlled conditions and overcome the limitations of current mobile sensor technologies. Physiological data streams from various sensors were integrated for comprehensive emotional analysis. ElectroEncephaloGraphy (EEG) data revealed brain activity linked to emotional states, while eye tracking data provided insights into gaze direction, pupil dilation, and eye movement—factors correlated with cognitive and emotional processes. Peripheral signals, including heart rate variability, ElectroDermal Activity (EDA), and body temperature, were captured via wearable sensors to enrich the dataset. Machine learning models, such as XGBoost, CatBoost, Multilayer Perceptron, Gradient Boosting, and LightGBM, were employed to predict participants’ emotional states. Evaluation metrics, including accuracy, precision, recall, and F1 scores, demonstrated the robustness and precision of the proposed VR-based multi-sensor fusion approach. This research presents a novel approach to emotion recognition, bridging gaps in traditional methods by integrating VR, multi-sensor fusion, and machine learning.
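As a rough illustration of the feature-level fusion and evaluation pipeline the abstract describes, the sketch below concatenates per-window features from the three sensor modalities, trains one of the named model families, and reports the four metrics the study uses. All feature names, dimensions, class counts, and the synthetic data are assumptions for illustration; the paper's exact feature set and preprocessing are not given here.

# Minimal sketch of multi-sensor feature fusion for emotion classification.
# All feature blocks and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(42)
n_samples = 950  # placeholder: e.g. several analysis windows per participant

# Placeholder feature blocks, one per modality (dimensions are assumptions).
eeg_features = rng.normal(size=(n_samples, 32))         # e.g. band powers per channel
eye_features = rng.normal(size=(n_samples, 8))          # e.g. pupil diameter, fixation stats
peripheral_features = rng.normal(size=(n_samples, 6))   # e.g. HRV, EDA, skin temperature

# Feature-level fusion: concatenate modalities into one vector per sample.
X = np.hstack([eeg_features, eye_features, peripheral_features])
y = rng.integers(0, 4, size=n_samples)  # placeholder labels for 4 emotion classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Gradient Boosting is one of the model families named in the abstract;
# XGBoost, CatBoost, LightGBM, or an MLP would slot in the same way.
clf = GradientBoostingClassifier(random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# The four evaluation metrics reported in the study.
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_test, pred, average="macro", zero_division=0))
print("f1       :", f1_score(y_test, pred, average="macro"))

Macro averaging is used here so every emotion class weighs equally regardless of how often it was elicited; the paper does not state which averaging scheme it applies, so this is a sketch-level choice.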
Source Journal

Computers in Biology and Medicine (Engineering/Technology: Biomedical Engineering)
CiteScore: 11.70
Self-citation rate: 10.40%
Annual article count: 1086
Review time: 74 days

Journal description: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.