AI Model Based on Diaphragm Ultrasound to Improve the Predictive Performance of Invasive Mechanical Ventilation Weaning: Prospective Cohort Study

Impact Factor: 2.0 | JCR Quartile: Q3 (Health Care Sciences & Services)
Feier Song, Huazhang Liu, Huan Ma, Xuanhui Chen, Shouhong Wang, Tiehe Qin, Huiying Liang, Daozheng Huang
{"title":"基于膈超声的AI模型提高有创机械通气脱机预测性能:前瞻性队列研究。","authors":"Feier Song, Huazhang Liu, Huan Ma, Xuanhui Chen, Shouhong Wang, Tiehe Qin, Huiying Liang, Daozheng Huang","doi":"10.2196/72482","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Point-of-care ultrasonography has become a valuable tool for assessing diaphragmatic function in critically ill patients receiving invasive mechanical ventilation. However, conventional diaphragm ultrasound assessment remains highly operator-dependent and subjective. Previous research introduced automatic measurement of diaphragmatic excursion and velocity using 2D speckle-tracking technology.</p><p><strong>Objective: </strong>This study aimed to develop an artificial intelligence-multimodal learning framework to improve the prediction of weaning failure and guide individualized weaning strategies.</p><p><strong>Methods: </strong>This prospective study enrolled critically ill patients older than 18 years who received mechanical ventilation for more than 48 hours and were eligible for a spontaneous breathing trial in 2 intensive care units in Guangzhou, China. Before the spontaneous breathing trial, diaphragm ultrasound videos were collected using a standardized protocol, and automatic measurements of excursion and velocity were obtained. A total of 88 patients were included, with 50 successfully weaned and 38 experiencing weaning failure. Each patient record included 27 clinical and 6 diaphragmatic indicators, selected based on previous literature and phenotyping studies. Clinical variables were preprocessed using OneHotEncoder, normalization, and scaling. Ultrasound videos were interpolated to a uniform resolution of 224×224×96. Artificial intelligence-multimodal learning based on clinical characteristics, laboratory parameters, and diaphragm ultrasonic videos was established. Four experiments were conducted in an ablation setting to evaluate model performance using different combinations of input data: (1) diaphragmatic excursion only, (2) clinical and diaphragmatic indicators, (3) ultrasound videos only, and (4) all modalities combined (multimodal). Metrics for evaluation included classification accuracy, area under the receiver operating characteristic curve (AUC), average precision in the precision-recall curve, and calibration curve. Variable importance was assessed using SHAP (Shapley Additive Explanation) to interpret feature contributions and understand model predictions.</p><p><strong>Results: </strong>The multimodal co-learning model outperformed all single-modal approaches. The accuracy improved when predicted through diaphragm ultrasound video data using Video Vision Transformer (accuracy=0.8095, AUC=0.852), clinical or ultrasound indicators (accuracy=0.7381, AUC=0.746), and the multimodal co-learning (accuracy=0.8331, AUC=0.894). The proposed co-learning model achieved the highest score (average precision=0.91) among the 4 experiments. Furthermore, calibration curve analysis demonstrated that the proposed colearning model was well calibrated, as the curve was closest to the perfectly calibrated line.</p><p><strong>Conclusions: </strong>Combining ultrasound and clinical data for colearning improved the accuracy of the weaning outcome prediction. Multimodal learning based on automatic measurement of point-of-care ultrasonography and automated collection of objective clinical indicators greatly enhanced the practical operability and user-friendliness of the system. 
The proposed model offered promising potential for widespread clinical application in intensive care settings.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":"9 ","pages":"e72482"},"PeriodicalIF":2.0000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12418127/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI Model Based on Diaphragm Ultrasound to Improve the Predictive Performance of Invasive Mechanical Ventilation Weaning: Prospective Cohort Study.\",\"authors\":\"Feier Song, Huazhang Liu, Huan Ma, Xuanhui Chen, Shouhong Wang, Tiehe Qin, Huiying Liang, Daozheng Huang\",\"doi\":\"10.2196/72482\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Point-of-care ultrasonography has become a valuable tool for assessing diaphragmatic function in critically ill patients receiving invasive mechanical ventilation. However, conventional diaphragm ultrasound assessment remains highly operator-dependent and subjective. Previous research introduced automatic measurement of diaphragmatic excursion and velocity using 2D speckle-tracking technology.</p><p><strong>Objective: </strong>This study aimed to develop an artificial intelligence-multimodal learning framework to improve the prediction of weaning failure and guide individualized weaning strategies.</p><p><strong>Methods: </strong>This prospective study enrolled critically ill patients older than 18 years who received mechanical ventilation for more than 48 hours and were eligible for a spontaneous breathing trial in 2 intensive care units in Guangzhou, China. Before the spontaneous breathing trial, diaphragm ultrasound videos were collected using a standardized protocol, and automatic measurements of excursion and velocity were obtained. A total of 88 patients were included, with 50 successfully weaned and 38 experiencing weaning failure. Each patient record included 27 clinical and 6 diaphragmatic indicators, selected based on previous literature and phenotyping studies. Clinical variables were preprocessed using OneHotEncoder, normalization, and scaling. Ultrasound videos were interpolated to a uniform resolution of 224×224×96. Artificial intelligence-multimodal learning based on clinical characteristics, laboratory parameters, and diaphragm ultrasonic videos was established. Four experiments were conducted in an ablation setting to evaluate model performance using different combinations of input data: (1) diaphragmatic excursion only, (2) clinical and diaphragmatic indicators, (3) ultrasound videos only, and (4) all modalities combined (multimodal). Metrics for evaluation included classification accuracy, area under the receiver operating characteristic curve (AUC), average precision in the precision-recall curve, and calibration curve. Variable importance was assessed using SHAP (Shapley Additive Explanation) to interpret feature contributions and understand model predictions.</p><p><strong>Results: </strong>The multimodal co-learning model outperformed all single-modal approaches. The accuracy improved when predicted through diaphragm ultrasound video data using Video Vision Transformer (accuracy=0.8095, AUC=0.852), clinical or ultrasound indicators (accuracy=0.7381, AUC=0.746), and the multimodal co-learning (accuracy=0.8331, AUC=0.894). The proposed co-learning model achieved the highest score (average precision=0.91) among the 4 experiments. 
Furthermore, calibration curve analysis demonstrated that the proposed colearning model was well calibrated, as the curve was closest to the perfectly calibrated line.</p><p><strong>Conclusions: </strong>Combining ultrasound and clinical data for colearning improved the accuracy of the weaning outcome prediction. Multimodal learning based on automatic measurement of point-of-care ultrasonography and automated collection of objective clinical indicators greatly enhanced the practical operability and user-friendliness of the system. The proposed model offered promising potential for widespread clinical application in intensive care settings.</p>\",\"PeriodicalId\":14841,\"journal\":{\"name\":\"JMIR Formative Research\",\"volume\":\"9 \",\"pages\":\"e72482\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2025-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12418127/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JMIR Formative Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2196/72482\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/72482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background: Point-of-care ultrasonography has become a valuable tool for assessing diaphragmatic function in critically ill patients receiving invasive mechanical ventilation. However, conventional diaphragm ultrasound assessment remains highly operator-dependent and subjective. Previous research introduced automatic measurement of diaphragmatic excursion and velocity using 2D speckle-tracking technology.

Objective: This study aimed to develop an artificial intelligence-multimodal learning framework to improve the prediction of weaning failure and guide individualized weaning strategies.

Methods: This prospective study enrolled critically ill patients older than 18 years who received mechanical ventilation for more than 48 hours and were eligible for a spontaneous breathing trial in 2 intensive care units in Guangzhou, China. Before the spontaneous breathing trial, diaphragm ultrasound videos were collected using a standardized protocol, and automatic measurements of excursion and velocity were obtained. A total of 88 patients were included, with 50 successfully weaned and 38 experiencing weaning failure. Each patient record included 27 clinical and 6 diaphragmatic indicators, selected based on previous literature and phenotyping studies. Clinical variables were preprocessed with one-hot encoding (OneHotEncoder), normalization, and scaling. Ultrasound videos were interpolated to a uniform resolution of 224×224×96. An artificial intelligence multimodal learning framework based on clinical characteristics, laboratory parameters, and diaphragm ultrasound videos was established. Four experiments were conducted in an ablation setting to evaluate model performance using different combinations of input data: (1) diaphragmatic excursion only, (2) clinical and diaphragmatic indicators, (3) ultrasound videos only, and (4) all modalities combined (multimodal). Metrics for evaluation included classification accuracy, area under the receiver operating characteristic curve (AUC), average precision from the precision-recall curve, and the calibration curve. Variable importance was assessed using SHAP (Shapley Additive Explanation) to interpret feature contributions and understand model predictions.
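The abstract names the main preprocessing steps but gives no implementation details. As a rough illustration of one-hot encoding and scaling of the tabular variables and resampling of the ultrasound clips to a 224×224×96 volume, a minimal Python sketch might look like the following; all function names, column lists, and array shapes are assumptions for illustration, not the authors' code.

```python
# Illustrative preprocessing sketch (assumed, not the authors' implementation).
# Tabular variables are one-hot encoded and standardized; each diaphragm ultrasound
# clip is resampled to 96 frames of 224x224 pixels, as described in the Methods.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def preprocess_tabular(X, categorical_cols, numeric_cols):
    """One-hot encode categorical variables and standardize numeric ones."""
    transformer = ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
        ("num", StandardScaler(), numeric_cols),
    ])
    return transformer.fit_transform(X), transformer

def resample_video(frames: np.ndarray) -> torch.Tensor:
    """Resample an ultrasound clip of shape (T, H, W) to (1, 96, 224, 224)."""
    vol = torch.from_numpy(frames).float().unsqueeze(0).unsqueeze(0)  # (1, 1, T, H, W)
    vol = F.interpolate(vol, size=(96, 224, 224),
                        mode="trilinear", align_corners=False)
    return vol.squeeze(0)  # single-channel volume for a video transformer backbone
```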

Results: The multimodal co-learning model outperformed all single-modal approaches. Prediction accuracy improved from clinical and ultrasound indicators (accuracy=0.7381, AUC=0.746) to diaphragm ultrasound video data analyzed with a Video Vision Transformer (accuracy=0.8095, AUC=0.852), and was highest with multimodal co-learning (accuracy=0.8331, AUC=0.894). The proposed co-learning model also achieved the highest average precision (0.91) among the 4 experiments. Furthermore, calibration curve analysis demonstrated that the proposed co-learning model was well calibrated, as its curve was closest to the perfectly calibrated line.
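The metrics reported above can be reproduced from held-out predictions with standard tooling. The sketch below is an assumed illustration of how accuracy, AUC, average precision, and calibration-curve points might be computed with scikit-learn; it is not the authors' evaluation code, and the threshold and bin count are hypothetical.

```python
# Illustrative evaluation sketch (assumed): computes the metrics reported in the
# Results from predicted weaning-failure probabilities on a held-out set.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score
from sklearn.calibration import calibration_curve

def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    """Return accuracy, AUC, average precision, and calibration-curve points."""
    y_pred = (y_prob >= threshold).astype(int)
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),
        "average_precision": average_precision_score(y_true, y_prob),
    }
    # Fraction of positives vs. mean predicted probability per bin; a well-calibrated
    # model tracks the diagonal, as reported for the co-learning model.
    frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
    return metrics, (frac_pos, mean_pred)
```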

Conclusions: Combining ultrasound and clinical data for co-learning improved the accuracy of weaning outcome prediction. Multimodal learning based on automatic measurement of point-of-care ultrasonography and automated collection of objective clinical indicators greatly enhanced the practical operability and user-friendliness of the system. The proposed model offered promising potential for widespread clinical application in intensive care settings.

Source journal: JMIR Formative Research (Medicine, miscellaneous)
CiteScore: 2.70 | Self-citation rate: 9.10% | Articles published: 579 | Review time: 12 weeks