Convolutional Neural Network Models for Visual Classification of Pressure Ulcer Stages: Cross-Sectional Study

IF 3.1 · CAS Tier 3 (Medicine) · JCR Q2 · MEDICAL INFORMATICS
Changbin Lei, Yan Jiang, Ke Xu, Shanshan Liu, Hua Cao, Cong Wang
Citations: 0

Abstract


Background: Pressure injuries (PIs) pose a negative health impact and a substantial economic burden on patients and society. Accurate staging is crucial for treating PIs. Owing to the diversity in the clinical manifestations of PIs and the lack of objective biochemical and pathological examinations, accurate staging of PIs is a major challenge. The deep learning algorithm, which uses convolutional neural networks (CNNs), has demonstrated exceptional classification performance in the intricate domain of skin diseases and wounds and has the potential to improve the staging accuracy of PIs.

Objective: We explored the potential of applying AlexNet, VGGNet16, ResNet18, and DenseNet121 to PI staging, aiming to provide an effective tool to assist in staging.

Methods: PI images from patients, including those with stage I, stage II, stage III, stage IV, unstageable, and suspected deep tissue injury (SDTI) presentations, were collected at a tertiary hospital in China. We augmented the data by cropping and flipping each PI image to produce 9 variants. The resulting images were divided into training, validation, and test sets at a ratio of 8:1:1. We then trained AlexNet, VGGNet16, ResNet18, and DenseNet121 on these sets to develop staging models.
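The paper does not publish its pipeline code; the following is a minimal, stdlib-only sketch of the augmentation bookkeeping and the 8:1:1 split described above, with string IDs standing in for actual image files (all names are hypothetical):

```python
import random

def augment_ids(image_ids, n_variants=9):
    """Expand each raw image ID into n_variants augmented IDs,
    standing in for the 9 cropped/flipped copies per image."""
    return [f"{img}_aug{k}" for img in image_ids for k in range(n_variants)]

def split_811(items, seed=42):
    """Shuffle and split a list into training/validation/test sets at 8:1:1."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

raw = [f"img{i:04d}" for i in range(853)]   # 853 raw PI images
augmented = augment_ids(raw)                # 853 x 9 = 7677 images
train, val, test = split_811(augmented)
```

Note that the abstract states the split was applied to the augmented set; splitting the raw images before augmentation would instead keep variants of one photograph out of both training and test sets, which is the usual way to avoid leakage.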

Results: We collected 853 raw PI images with the following distributions across stages: stage I (n=148), stage II (n=121), stage III (n=216), stage IV (n=110), unstageable (n=128), and SDTI (n=130). A total of 7677 images were obtained after data augmentation. Among all the CNN models, DenseNet121 demonstrated the highest overall accuracy of 93.71%. The classification performances of AlexNet, VGGNet16, and ResNet18 exhibited overall accuracies of 87.74%, 82.42%, and 92.42%, respectively.
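The overall accuracies reported here are plain top-1 accuracies over the test set: the fraction of images whose predicted stage matches the reference stage. A minimal sketch (function and label names hypothetical):

```python
def overall_accuracy(y_true, y_pred):
    """Overall (top-1) accuracy: fraction of predictions matching the reference."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have equal length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy example over the six PI classes:
truth = ["I", "II", "III", "IV", "unstageable", "SDTI"]
preds = ["I", "II", "III", "IV", "unstageable", "III"]
print(f"{overall_accuracy(truth, preds):.2%}")  # prints 83.33% (5/6 correct)
```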

Conclusions: The CNN-based models demonstrated strong classification ability on PI images and could support highly efficient, intelligent PI staging. In future work, the models could be compared against nurses with different levels of experience to further verify their clinical utility.

Source journal: JMIR Medical Informatics (Medicine - Health Informatics)
CiteScore: 7.90
Self-citation rate: 3.10%
Articles per year: 173
Review time: 12 weeks
About the journal: JMIR Medical Informatics (JMI, ISSN 2291-9694) is a top-rated, tier A journal that focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, eHealth infrastructures, and implementation. It emphasizes applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals. Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2016: 5.175), JMIR Med Inform has a slightly different scope, emphasizing applications for clinicians and health professionals rather than consumers/citizens (the focus of JMIR), publishes even faster, and also accepts papers that are more technical or more formative than those published in the Journal of Medical Internet Research.