Pediatric BurnNet: Robust multi-class segmentation and severity recognition under real-world imaging conditions.

SAGE Open Medicine (IF 2.1, Q2: Medicine, General & Internal)
Pub Date: 2025-07-24 · eCollection Date: 2025-01-01 · DOI: 10.1177/20503121251360090
Xiang Li, Zhen Liu, Lei Liu

Abstract

Objective: To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions.

Methods: We retrospectively collected 4785 smartphone or camera photographs of hospitalized children over a 5-year period and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. A DeepLabv3 network with a ResNet101 backbone was enhanced with channel and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴).
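The abstract does not specify how the class weights in the weighted cross-entropy loss were chosen. A common convention is inverse-frequency weighting, where rarer classes (e.g. third-degree regions) contribute proportionally more to the loss. The sketch below, in plain Python for clarity, illustrates that convention; the function names and the mean-one normalization are illustrative choices, not taken from the paper.

```python
import math

def inverse_frequency_weights(pixel_counts):
    """Per-class weights inversely proportional to class frequency,
    normalized so the weights average to 1 (a common convention)."""
    total = sum(pixel_counts)
    raw = [total / c for c in pixel_counts]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]

def weighted_cross_entropy(probs, labels, weights):
    """Mean weighted cross-entropy over pixels.

    probs:   per-pixel predicted class-probability lists
    labels:  per-pixel true class indices
    weights: per-class loss weights
    """
    losses = [-weights[y] * math.log(p[y]) for p, y in zip(probs, labels)]
    return sum(losses) / len(losses)
```

With this weighting, a class holding 25% of the pixels receives three times the weight of a class holding 75%, so errors on the minority class are penalized more heavily during training.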

Results: The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%-19.7% and intersection-over-union by 14.9%-23.5% (all p < 0.01). Inference took 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, indicating computational demands modest enough for mobile deployment.
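For reference, the two segmentation metrics reported above are computed from the same pixel overlap: for a single mask pair, IoU = Dice / (2 − Dice), so a Dice of 0.8766 corresponds to a per-region IoU near 0.78 (the reported means are averaged over images and folds, so the identity need not hold exactly for the aggregate figures). A minimal sketch for binary masks, with illustrative function names:

```python
def dice_and_iou(pred, target):
    """Dice coefficient and intersection-over-union for two binary masks,
    given as flat sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    # Convention: two empty masks count as a perfect match.
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Because Dice weights the intersection twice, it is always at least as large as IoU, which is why the paper's Dice improvements (15.2%-19.7%) and IoU improvements (14.9%-23.5%) move together but are not identical.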

Conclusion: Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.

