{"title":"儿科BurnNet:在真实成像条件下稳健的多类别分割和严重程度识别。","authors":"Xiang Li, Zhen Liu, Lei Liu","doi":"10.1177/20503121251360090","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions.</p><p><strong>Methods: </strong>We retrospectively collected 4785 smartphone or camera photographs from hospitalized children over 5 years and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. A DeepLabv3 network with a ResNet101 backbone was enhanced with channel- and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴).</p><p><strong>Results: </strong>The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%-19.7% and intersection-over-union by 14.9%-23.5% (all <i>p</i> < 0.01). Inference speed was 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, highlighting the modest computational demands suitable for mobile deployment.</p><p><strong>Conclusion: </strong>Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.</p>","PeriodicalId":21398,"journal":{"name":"SAGE Open Medicine","volume":"13 ","pages":"20503121251360090"},"PeriodicalIF":2.1000,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301600/pdf/","citationCount":"0","resultStr":"{\"title\":\"Pediatric BurnNet: Robust multi-class segmentation and severity recognition under real-world imaging conditions.\",\"authors\":\"Xiang Li, Zhen Liu, Lei Liu\",\"doi\":\"10.1177/20503121251360090\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions.</p><p><strong>Methods: </strong>We retrospectively collected 4785 smartphone or camera photographs from hospitalized children over 5 years and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. 
A DeepLabv3 network with a ResNet101 backbone was enhanced with channel- and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴).</p><p><strong>Results: </strong>The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%-19.7% and intersection-over-union by 14.9%-23.5% (all <i>p</i> < 0.01). Inference speed was 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, highlighting the modest computational demands suitable for mobile deployment.</p><p><strong>Conclusion: </strong>Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.</p>\",\"PeriodicalId\":21398,\"journal\":{\"name\":\"SAGE Open Medicine\",\"volume\":\"13 \",\"pages\":\"20503121251360090\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301600/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SAGE Open Medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/20503121251360090\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SAGE Open Medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20503121251360090","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Pediatric BurnNet: Robust multi-class segmentation and severity recognition under real-world imaging conditions.
Objective: To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions.
Methods: We retrospectively collected 4785 smartphone or camera photographs over a 5-year period from hospitalized children and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. A DeepLabv3 network with a ResNet101 backbone was enhanced with channel and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴).
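As a minimal sketch of the training setup described above: the abstract names the components (attention modules, dropout-reinforced ASPP, weighted cross-entropy, Adam at 1 × 10⁻⁴) but not their exact design, so the CBAM-style attention block, its placement, the dropout rate, and the class weights below are all assumptions, not the authors' implementation.

```python
# Hedged sketch of an attention-enhanced DeepLabv3-ResNet101 in PyTorch.
# The attention block design, its placement, the dropout rate, and the
# class weights are assumptions; only the named components come from the paper.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101

class ChannelSpatialAttention(nn.Module):
    """CBAM-style attention: channel gating followed by spatial gating."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)                       # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True),       # spatial attention
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)

NUM_CLASSES = 4  # background + three burn-depth classes (assumed label layout)
model = deeplabv3_resnet101(weights=None, num_classes=NUM_CLASSES)

# Insert attention and extra dropout after the ASPP head (a stand-in for the
# paper's "dropout-reinforced ASPP"; torchvision's head is an nn.Sequential of
# ASPP, 3x3 conv, BN, ReLU, 1x1 classifier).
model.classifier = nn.Sequential(
    model.classifier[0],                  # ASPP, outputs 256 channels
    ChannelSpatialAttention(256),
    nn.Dropout2d(0.5),                    # assumed dropout rate
    *list(model.classifier)[1:],
)

# Weighted cross-entropy to counter class imbalance (weights assumed).
class_weights = torch.tensor([0.25, 1.0, 1.5, 2.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step at the reported batch size and resolution.
x = torch.randn(8, 3, 256, 256)
y = torch.randint(0, NUM_CLASSES, (8, 256, 256))
optimizer.zero_grad()
loss = criterion(model(x)["out"], y)
loss.backward()
optimizer.step()
```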
Results: The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%-19.7% and intersection-over-union by 14.9%-23.5% (all p < 0.01). Inference speed was 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, highlighting the modest computational demands suitable for mobile deployment.
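For reference, the Dice coefficient and intersection-over-union reported above follow the standard per-class definitions, Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|. The sketch below implements these definitions generically; it is not the authors' evaluation code, and the class index used in the example is illustrative.

```python
# Generic per-class Dice and IoU over binary masks; a sketch of the standard
# definitions, not the authors' evaluation code.
import torch

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """pred, target: boolean masks of the same shape for one class."""
    inter = (pred & target).sum().float()
    pred_sum = pred.sum().float()
    target_sum = target.sum().float()
    dice = (2 * inter + eps) / (pred_sum + target_sum + eps)
    iou = (inter + eps) / (pred_sum + target_sum - inter + eps)
    return dice.item(), iou.item()

# Example: evaluate the argmax prediction for class 1
# (here assumed to be superficial second-degree).
logits = torch.randn(1, 4, 256, 256)           # model output
labels = torch.randint(0, 4, (1, 256, 256))    # ground-truth mask
dice, iou = dice_and_iou(logits.argmax(1) == 1, labels == 1)
```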
Conclusion: Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.