Authors: Jin Li, Hao-Jie Shan, Xiao-Wei Yu
DOI: 10.1016/j.cjtee.2024.10.006
Journal: Chinese Journal of Traumatology (impact factor 1.8, JCR Q2, Orthopedics)
Publication date: 2025-03-15
Publication type: Journal Article
Citations: 0
Abstract
Fracture detection of distal radius using a deep-learning-based dual-channel feature fusion algorithm.
Purpose: Distal radius fracture is a common traumatic fracture, and timely preoperative diagnosis is crucial for the patient's recovery. With the rise of deep-learning applications in medicine, using deep learning to diagnose distal radius fractures has become an important research topic. However, previous work has suffered from low detection accuracy and poor identification of occult fractures. This study aims to design an improved deep-learning model that helps surgeons diagnose distal radius fractures more quickly and accurately.
Methods: Inspired by surgeons' combined reading of anteroposterior and lateral X-ray images when diagnosing distal radius fractures, this study designs a dual-channel feature fusion network for detecting distal radius fractures. Based on the Faster region-based convolutional neural network framework, an additional Residual Network 50 (ResNet-50), integrated with the Deformable and Separable Attention mechanism, was introduced to extract semantic information from lateral X-ray images of the distal radius. The features extracted from the 2 channels were then combined via feature fusion, enriching the network's feature information. The focal loss function was also employed to address the sample imbalance problem during training. Cases were selected from distal radius X-ray images retrieved from the hospital's imaging database according to the following criteria. Inclusion criteria: clear anteroposterior and lateral X-ray images diagnosed as distal radius fractures by experienced radiologists. Exclusion criteria: poor image quality, severe multiple or complex fractures, and non-adult or special populations (e.g., pregnant women). All cases meeting the inclusion criteria were labeled as distal radius fracture cases for model training and evaluation. To assess the model's performance, this study employed several metrics, including accuracy, precision, recall, area under the precision-recall curve, and intersection over union.
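The focal loss mentioned above down-weights well-classified examples so that training concentrates on hard, rare positives such as subtle fractures. A minimal pure-Python sketch of the binary form (the alpha and gamma values below are the common defaults from the focal loss literature, not values reported in this abstract):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive (fracture) class.
    y: ground-truth label, 1 for fracture, 0 for background.
    alpha, gamma: common defaults; the study's settings are not given here.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor shrinks the loss of easy examples,
    # so hard (low p_t) examples dominate the gradient.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For instance, a confidently correct positive (p=0.9) contributes far less loss than a badly missed one (p=0.1), which is how the focal loss counters the background-heavy sample imbalance of detection training.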
Results: The proposed dual-channel feature fusion network achieved an AP50 (average precision at an intersection-over-union threshold of 0.5) of 98.5%, an AP75 of 78.4%, an accuracy of 96.5%, and a recall of 94.7%. Compared with traditional models such as the Faster region-based convolutional neural network (AP50=94.1%, AP75=70.6%, precision=91.1%, recall=92.3%), our method shows notable improvements in all key metrics. Similarly, compared with other classic object detection networks such as You Only Look Once version 4 (AP50=95.2%, AP75=72.2%, precision=91.2%, recall=92.4%) and You Only Look Once version 5s (AP50=95.1%, AP75=73.8%, precision=93.7%, recall=92.8%), the dual-channel feature fusion network outperforms them in precision, recall, and AP scores. These results highlight the accuracy and reliability of the proposed method, particularly in identifying both apparent and occult distal radius fractures, demonstrating its value in clinical settings where precise detection of subtle fractures is critical.
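AP50 and AP75 count a predicted box as correct when its intersection over union (IoU) with the ground-truth box is at least 0.5 or 0.75, respectively, so the stricter AP75 rewards tighter localization of the fracture site. A minimal sketch of the IoU computation for axis-aligned boxes (the (x1, y1, x2, y2) coordinate convention is assumed, not specified in the abstract):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

A loose detection with IoU 0.6 counts as a hit for AP50 but a miss for AP75, which is why AP75 values in the table above are uniformly lower than AP50.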
Conclusion: This study found that combining anteroposterior and lateral X-ray images of the distal radius as input for deep-learning algorithms can more accurately and efficiently identify distal radius fractures, providing a reference for research on distal radius fractures.
Journal introduction:
Chinese Journal of Traumatology (CJT, ISSN 1008-1275) was launched in 1998 and is a peer-reviewed English journal authorized by the Chinese Association of Trauma, Chinese Medical Association. It is multidisciplinary and designed to provide the most current and relevant information for both clinical and basic research in the field of traumatic medicine. CJT primarily publishes expert forums, original papers, and case reports. Topics cover trauma systems and management, surgical procedures, acute care, rehabilitation, post-traumatic complications, translational medicine, traffic medicine, and other related areas. The journal especially emphasizes clinical applications, techniques, surgical videos, guidelines, and recommendations for more effective surgical approaches.