Stress can be detected during emotion-evoking smartphone use: a pilot study using machine learning.

Frontiers in Digital Health · IF 3.2 · Q1 (Health Care Sciences & Services)
Pub Date: 2025-04-30 · eCollection Date: 2025-01-01 · DOI: 10.3389/fdgth.2025.1578917
Lydia Helene Rupp, Akash Kumar, Misha Sadeghi, Lena Schindler-Gmelch, Marie Keinert, Bjoern M Eskofier, Matthias Berking
Citations: 0

Abstract

Stress can be detected during emotion-evoking smartphone use: a pilot study using machine learning.

Introduction: The detrimental consequences of stress highlight the need for precise stress detection, as this offers a window for timely intervention. However, both objective and subjective measurements suffer from validity limitations. Contactless sensing technologies using machine learning methods present a potential alternative and could be used to estimate stress from externally visible physiological changes, such as emotional facial expressions. Although previous studies were able to classify stress from emotional expressions with accuracies of up to 88.32%, most works employed a classification approach and relied on data from contexts where stress was induced. Therefore, the primary aim of the present study was to clarify whether stress can be detected from facial expressions of six basic emotions (anxiety, anger, disgust, sadness, joy, love) and relaxation using a prediction approach.

Method: To attain this goal, we analyzed video recordings of facial emotional expressions collected from n = 69 participants in a secondary analysis of a dataset from an interventional study. We aimed to explore associations with stress (assessed by the PSS-10 and a one-item stress measure).

Results: Comparing two regression machine learning models [Random Forest (RF) and XGBoost], we found that facial emotional expressions were promising indicators of stress scores, with model fit being best when data from all six emotional facial expressions was used to train the model (one-item stress measure: MSE (XGB) = 2.31, MAE (XGB) = 1.32, MSE (RF) = 3.86, MAE (RF) = 1.69; PSS-10: MSE (XGB) = 25.65, MAE (XGB) = 4.16, MSE (RF) = 26.32, MAE (RF) = 4.14). XGBoost proved more reliable for prediction, with lower error on both training and test data.
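The regression comparison described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' pipeline: the features (one aggregate expression score per emotion) and the stress targets are synthetic stand-ins, and scikit-learn's GradientBoostingRegressor is used here in place of XGBoost to keep the example dependency-free.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 69 participants x 6 emotion-expression features
# (anxiety, anger, disgust, sadness, joy, love); y is a stand-in stress score.
X = rng.normal(size=(69, 6))
y = 2 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=69)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GB": GradientBoostingRegressor(random_state=0),  # stand-in for XGBoost
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Report the same error metrics used in the abstract (MSE and MAE)
    print(name,
          "MSE =", round(mean_squared_error(y_te, pred), 2),
          "MAE =", round(mean_absolute_error(y_te, pred), 2))
```

Comparing held-out MSE/MAE across both regressors, as done here, is what supports the paper's claim that one model generalizes more reliably than the other.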

Discussion: The findings provide further evidence that non-invasive video recordings can complement standard objective and subjective markers of stress.

Source journal: Frontiers in Digital Health · CiteScore 4.20 · self-citation rate 0.00% · review time: 13 weeks