Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study.

IF 4.2 · Zone 4 (Medicine) · Q1 ANESTHESIOLOGY
Korean Journal of Anesthesiology · Pub Date: 2024-04-01 · Epub Date: 2024-01-05 · DOI: 10.4097/kja.23583
Insun Park, Jae Hyon Park, Jongjin Yoon, Hyo-Seok Na, Ah-Young Oh, Junghee Ryu, Bon-Wook Koo
{"title":"面部表情机器学习模型优于使用镇痛痛觉指数和生命体征预测术后疼痛强度的模型:一项试验研究。","authors":"Insun Park, Jae Hyon Park, Jongjin Yoon, Hyo-Seok Na, Ah-Young Oh, Junghee Ryu, Bon-Wook Koo","doi":"10.4097/kja.23583","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.</p><p><strong>Methods: </strong>In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' area under the receiver operating characteristic curves (AUROCs) were calculated and compared using DeLong's test.</p><p><strong>Results: </strong>ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. The ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93) followed by the ML model combining facial expressions and vital signs (AUROC 0.84) in the test-set. ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although the AUROCs were inferior to those of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.</p><p><strong>Conclusions: </strong>The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.</p>","PeriodicalId":17855,"journal":{"name":"Korean Journal of Anesthesiology","volume":" ","pages":"195-204"},"PeriodicalIF":4.2000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10982524/pdf/","citationCount":"0","resultStr":"{\"title\":\"Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study.\",\"authors\":\"Insun Park, Jae Hyon Park, Jongjin Yoon, Hyo-Seok Na, Ah-Young Oh, Junghee Ryu, Bon-Wook Koo\",\"doi\":\"10.4097/kja.23583\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. 
In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.</p><p><strong>Methods: </strong>In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' area under the receiver operating characteristic curves (AUROCs) were calculated and compared using DeLong's test.</p><p><strong>Results: </strong>ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. The ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93) followed by the ML model combining facial expressions and vital signs (AUROC 0.84) in the test-set. ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although the AUROCs were inferior to those of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.</p><p><strong>Conclusions: </strong>The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.</p>\",\"PeriodicalId\":17855,\"journal\":{\"name\":\"Korean Journal of Anesthesiology\",\"volume\":\" \",\"pages\":\"195-204\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10982524/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Korean Journal of Anesthesiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.4097/kja.23583\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/5 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"ANESTHESIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Korean Journal of Anesthesiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.4097/kja.23583","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/5 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ANESTHESIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or its correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performance in predicting severe postoperative pain was compared.

Methods: In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity on the 11-point numerical rating scale (NRS). The ML models' areas under the receiver operating characteristic curve (AUROCs) were calculated and compared using DeLong's test.
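
The statistical comparison named above (correlated AUROCs compared with DeLong's test) can be illustrated with a short, generic sketch. The code below is not the authors' analysis code; the input names (y_true, scores_a, scores_b) and the usage lines are hypothetical placeholders, and only the standard DeLong variance formula is implemented.

```python
import numpy as np
from scipy.stats import norm


def _structural_components(pos_scores, neg_scores):
    """V10 (one value per positive case) and V01 (one value per negative case)
    for the Mann-Whitney kernel: 1 if pos > neg, 0.5 if tied, 0 otherwise."""
    diff = pos_scores[:, None] - neg_scores[None, :]      # shape (n_pos, n_neg)
    psi = (diff > 0).astype(float) + 0.5 * (diff == 0)
    return psi.mean(axis=1), psi.mean(axis=0)


def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference between two correlated AUROCs
    computed on the same cases. Returns (auc_a, auc_b, p_value)."""
    y_true = np.asarray(y_true).astype(bool)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)

    v10_a, v01_a = _structural_components(a[y_true], a[~y_true])
    v10_b, v01_b = _structural_components(b[y_true], b[~y_true])
    auc_a, auc_b = v10_a.mean(), v10_b.mean()

    m, n = y_true.sum(), (~y_true).sum()                  # positives, negatives
    s10 = np.cov(np.vstack([v10_a, v10_b]))               # covariance over positives
    s01 = np.cov(np.vstack([v01_a, v01_b]))               # covariance over negatives
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n

    z = (auc_a - auc_b) / np.sqrt(var)
    return auc_a, auc_b, 2 * norm.sf(abs(z))


# Hypothetical usage: binarize the NRS at the severe-pain cutoff used in the study.
# severe = (nrs >= 7).astype(int)
# auc_face, auc_ani, p = delong_test(severe, facial_model_probs, ani_model_probs)
```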

Results: ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. In the test set, the ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93), followed by the model combining facial expressions and vital signs (AUROC 0.84). ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although their AUROCs were inferior to that of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.
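
To make the "individual versus combined feature set" comparison concrete, the sketch below trains one classifier per feature set and reports held-out AUROCs. Everything here is a placeholder: the feature arrays are random stand-ins for facial, vital-sign, and ANI inputs, and the feature dimensions, random-forest classifier, and 70/30 split are assumptions for illustration, not details taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 155                                   # number of recorded observations in the study
facial = rng.normal(size=(n, 17))         # placeholder facial-expression features
vitals = rng.normal(size=(n, 4))          # placeholder vital-sign features
ani = rng.normal(size=(n, 2))             # placeholder absolute/relative ANI
severe_pain = (rng.uniform(size=n) < 0.3).astype(int)   # placeholder NRS >= 7 labels

feature_sets = {
    "facial": facial,
    "vitals": vitals,
    "ANI": ani,
    "vitals+ANI": np.hstack([vitals, ani]),
    "facial+vitals": np.hstack([facial, vitals]),
}

# Single stratified split so every model is evaluated on the same test cases.
idx_train, idx_test = train_test_split(
    np.arange(n), test_size=0.3, stratify=severe_pain, random_state=0
)
for name, X in feature_sets.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[idx_train], severe_pain[idx_train])
    probs = clf.predict_proba(X[idx_test])[:, 1]
    print(f"{name:>14}: test AUROC = {roc_auc_score(severe_pain[idx_test], probs):.2f}")
```

Evaluating all feature-set models on the same held-out cases is what makes a paired comparison such as DeLong's test (sketched above) applicable.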

Conclusions: The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.

Source journal: Korean Journal of Anesthesiology (CiteScore 6.20, self-citation rate 6.90%, articles published 84, review time 16 weeks).