YOLOv11-based multi-task learning for enhanced bone fracture detection and classification in X-ray images

IF 1.7 · JCR Q2 (Multidisciplinary Sciences) · CAS Zone 4 (Comprehensive Journals)
Wanmian Wei, Yan Huang, Junchi Zheng, Yuanyong Rao, Yongping Wei, Xingyue Tan, Haiyang OuYang
Journal of Radiation Research and Applied Sciences, Volume 18, Issue 1, Article 101309. Published 2025-01-24. DOI: 10.1016/j.jrras.2025.101309. Available at: https://www.sciencedirect.com/science/article/pii/S1687850725000214
Citations: 0

Abstract

Objective

This study presents a multi-task learning framework based on the YOLOv11 architecture to improve both fracture detection and localization. The goal is to provide an efficient solution for clinical applications.

Materials and methods

We used a large dataset of X-ray images containing both fracture and non-fracture cases from the upper and lower extremities. The dataset was split into training (70%), validation (15%), and test (15%) subsets: the training set comprised 10,966 cases (5778 normal, 5188 with fractures), while the validation and test sets each contained 2350 cases (1238 normal, 1112 with fractures). A multi-task learning model based on YOLOv11 was trained jointly for fracture classification and localization, with data augmentation applied to reduce overfitting and improve generalization. Training used the Adam optimizer with a learning rate of 0.001 and a batch size of 16. Model performance was evaluated with mean Average Precision (mAP) and Intersection over Union (IoU) at multiple thresholds, benchmarked against Faster R-CNN and SSD.
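The Intersection over Union metric used for localization evaluation has a standard definition; the sketch below shows it for axis-aligned boxes in (x1, y1, x2, y2) format. This is the generic metric, not the authors' evaluation code, and the box format is an assumption.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted box covering half of a ground-truth box scores 1/3:
# intersection 50, union 100 + 100 - 50 = 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A prediction counts as a correct localization at a given threshold (e.g. the 0.5 used for the mAP figures below) when its IoU with a ground-truth box meets that threshold.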

Results

The YOLOv11 model achieved a mean Average Precision (mAP) of 96.8% at an IoU threshold of 0.5 and a mean IoU of 92.5%, outperforming both Faster R-CNN (mAP: 87.5%, IoU: 85.23%) and SSD (mAP: 82.9%, IoU: 80.12%) in fracture detection and localization. This improvement highlights the model's accuracy and efficiency for real-time use.
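The mAP@0.5 figures above follow the standard average-precision construction: detections are ranked by confidence, each is labeled a true or false positive by IoU matching against ground truth, and precision is integrated over recall. A minimal sketch of that computation (assuming the TP/FP labeling has already been done upstream; this is the generic metric, not the paper's exact evaluation code):

```python
def average_precision(scored_hits, num_gt):
    """All-point-interpolated AP for one class.

    scored_hits: list of (confidence, is_true_positive) pairs, one per
    detection; a detection is a true positive when it matches a previously
    unmatched ground-truth box at IoU >= the threshold (0.5 for mAP@0.5).
    num_gt: total number of ground-truth boxes for the class.
    """
    hits = sorted(scored_hits, key=lambda h: h[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, is_tp in hits:
        tp, fp = (tp + 1, fp) if is_tp else (tp, fp + 1)
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Make the precision envelope non-increasing from the right, then
    # integrate precision over recall (area under the PR curve).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Two ground-truth boxes; one false positive ranked between two correct
# detections yields AP = 0.5 * 1.0 + 0.5 * (2/3) = 5/6.
print(average_precision([(0.9, True), (0.8, False), (0.7, True)], num_gt=2))
```

mAP is then the mean of this quantity over all classes (here, fracture types).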

Conclusions

The YOLOv11-based multi-task learning framework significantly outperforms traditional methods, offering high accuracy and real-time fracture localization. This model shows great potential for clinical use, improving diagnostic accuracy, increasing productivity, and streamlining the workflow for radiologists.
Source journal statistics:
Self-citation rate: 5.90%
Annual publications: 130
Review turnaround: 16 weeks
Journal description: Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear, radiation, and isotope techniques in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, and environmental and applied sciences.