Journal of imaging informatics in medicine: Latest Articles

Deep Learning Neural Network Based on PSO for Leukemia Cell Disease Diagnosis from Microscope Images.
Journal of imaging informatics in medicine Pub Date : 2025-03-20 DOI: 10.1007/s10278-025-01474-x
Hamsa Almahdawi, Ayhan Akbas, Javad Rahebi
{"title":"Deep Learning Neural Network Based on PSO for Leukemia Cell Disease Diagnosis from Microscope Images.","authors":"Hamsa Almahdawi, Ayhan Akbas, Javad Rahebi","doi":"10.1007/s10278-025-01474-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01474-x","url":null,"abstract":"<p><p>Leukemia is a kind of cancer characterized by the proliferation of abnormal, immature White Blood Cells (WBCs) produced in the bone marrow, which subsequently circulate throughout the body. Prompt leukemia diagnosis is vital in determining the optimal treatment plan, as different types of leukemia require distinct treatments. Early detection is therefore instrumental in facilitating the use of the most effective therapies. The identification of leukemia cells from microscopic images is considered a challenging task due to the complexity of the image features. This paper presents a deep learning neural network approach that utilizes the Particle Swarm Optimization (PSO) method to diagnose leukemia cell disease from microscope images. Initially, deep learning is employed to extract features from the leukemia images, which are then optimized by the PSO method to select the most relevant features for machine learning. Three different machine learning algorithms, namely Decision Tree (DT), Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN) methods, are utilized to analyze the selected features. The results of the experiments demonstrate PSO accuracies of 97.4%, 92.3%, and 85.9% for SVM, K-NN, and DT algorithms with GoogLeNet, respectively. The proposed method achieved accuracies of 100%, 94.9%, and 92.3% for SVM, K-NN, and DT methods respectively, with Ant Colony Optimization (ACO) feature extraction and ResNet-50 employed as revealed by the experimental results. 
These findings suggest that the proposed approach is a promising tool for accurate diagnosis of leukemia cell disease using microscopic images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
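The PSO feature-selection step described above can be sketched as a binary particle swarm over feature masks. This is an illustrative reconstruction, not the authors' code: the 1-nearest-neighbour fitness below stands in for their SVM/K-NN/DT classifiers, and the swarm hyperparameters are assumptions.

```python
import numpy as np

def pso_feature_selection(X, y, n_particles=10, n_iters=20, seed=0):
    """Binary PSO over feature masks; fitness = held-out 1-NN accuracy."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    half = len(X) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]

    def fitness(mask):
        if not mask.any():
            return 0.0
        a, b = Xte[:, mask], Xtr[:, mask]
        # 1-NN: label each test sample with its nearest training sample's label
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return float((ytr[d.argmin(1)] == yte).mean())

    vel = rng.normal(size=(n_particles, n_feat))
    pos = rng.random((n_particles, n_feat)) > 0.5
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = int(pbest_fit.argmax())
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = (0.7 * vel
               + 1.5 * r1 * (pbest.astype(float) - pos)
               + 1.5 * r2 * (gbest.astype(float) - pos))
        # sigmoid of velocity gives the probability of including each feature
        pos = rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        g = int(pbest_fit.argmax())
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest, float(gbest_fit)
```

In the paper the features come from a pretrained CNN (GoogLeNet or ResNet-50); here `X` is any feature matrix.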
Automated Detection of Hydrocephalus in Pediatric Head Computed Tomography Using VGG 16 CNN Deep Learning Architecture and Based Automated Segmentation Workflow for Ventricular Volume Estimation.
Journal of imaging informatics in medicine Pub Date : 2025-03-19 DOI: 10.1007/s10278-025-01482-x
Hamza Sekkat, Abdellah Khallouqi, Omar El Rhazouani, Abdellah Halimi
{"title":"Automated Detection of Hydrocephalus in Pediatric Head Computed Tomography Using VGG 16 CNN Deep Learning Architecture and Based Automated Segmentation Workflow for Ventricular Volume Estimation.","authors":"Hamza Sekkat, Abdellah Khallouqi, Omar El Rhazouani, Abdellah Halimi","doi":"10.1007/s10278-025-01482-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01482-x","url":null,"abstract":"<p><p>Hydrocephalus, particularly congenital hydrocephalus in infants, remains underexplored in deep learning research. While deep learning has been widely applied to medical image analysis, few studies have specifically addressed the automated classification of hydrocephalus. This study proposes a convolutional neural network (CNN) model based on the VGG16 architecture to detect hydrocephalus in infant head CT images. The model integrates an automated method for ventricular volume extraction, applying windowing, histogram equalization, and thresholding techniques to segment the ventricles from surrounding brain structures. Morphological operations refine the segmentation and contours are extracted for visualization and volume measurement. The dataset consists of 105 head CT scans, each with 60 slices covering the ventricular volume, resulting in 6300 slices. Manual segmentation by three trained radiologists served as the reference standard. The automated method showed a high correlation with manual measurements, with R<sup>2</sup> values ranging from 0.94 to 0.99. The mean absolute percentage error (MAPE) ranged 3.99 to 11.13%, while the root mean square error (RRMSE) from 4.56 to 13.74%. To improve model robustness, the dataset was preprocessed, normalized, and augmented with rotation, shifting, zooming, and flipping. The VGG16-based CNN used pre-trained convolutional layers with additional fully connected layers for classification, predicting hydrocephalus or normal labels. 
Performance evaluation using a multi-split strategy (15 independent splits) achieved a mean accuracy of 90.4% ± 1.2%. This study presents an automated approach for ventricular volume extraction and hydrocephalus detection, offering a promising tool for clinical and research applications with high accuracy and reduced observer bias.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143665814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
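The windowing-and-thresholding route to ventricular volume described above can be sketched as follows. The 0-15 HU window is an illustrative choice for cerebrospinal fluid, and the paper's histogram equalization and morphological refinement steps are omitted:

```python
import numpy as np

def ventricular_volume(ct_hu, voxel_mm3, lo=0.0, hi=15.0):
    """Estimate ventricular volume from a head-CT volume in Hounsfield units.

    Voxels whose HU falls in [lo, hi] (roughly CSF) are counted as ventricle.
    Returns the binary mask and the volume in millilitres.
    """
    mask = (ct_hu >= lo) & (ct_hu <= hi)
    volume_ml = mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> ml
    return mask, volume_ml
```

In practice the mask would be restricted to the intracranial region and cleaned with morphological opening/closing before counting voxels.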
Radiology AI Lab: Evaluation of Radiology Applications with Clinical End-Users.
Journal of imaging informatics in medicine Pub Date : 2025-03-17 DOI: 10.1007/s10278-025-01453-2
Olivier Paalvast, Merlijn Sevenster, Omar Hertgers, Hubrecht de Bliek, Victor Wijn, Vincent Buil, Jaap Knoester, Sandra Vosbergen, Hildo Lamb
{"title":"Radiology AI Lab: Evaluation of Radiology Applications with Clinical End-Users.","authors":"Olivier Paalvast, Merlijn Sevenster, Omar Hertgers, Hubrecht de Bliek, Victor Wijn, Vincent Buil, Jaap Knoester, Sandra Vosbergen, Hildo Lamb","doi":"10.1007/s10278-025-01453-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01453-2","url":null,"abstract":"<p><p>Despite the approval of over 200 artificial intelligence (AI) applications for radiology in the European Union, widespread adoption in clinical practice remains limited. Current assessments of AI applications often rely on post-hoc evaluations, lacking the granularity to capture real-time radiologist-AI interactions. The purpose of the study is to realise the Radiology AI lab for real-time, objective measurement of the impact of AI applications on radiologists' workflows. We proposed the user-state sensing framework (USSF) to structure the sensing of radiologist-AI interactions in terms of personal, interactional, and contextual states. Guided by the USSF, a lab was established using three non-invasive biometric measurement techniques: eye-tracking, heart rate monitoring, and facial expression analysis. We conducted a pilot test with four radiologists of varying experience levels, who read ultra-low-dose (ULD) CT cases in (1) standard PACS and (2) manually annotated (to mimic AI) PACS workflows. Interpretation time, eye-tracking metrics, heart rate variability (HRV), and facial expressions were recorded and analysed. The Radiology AI lab was successfully realised as an initial physical iteration of the USSF at a tertiary referral centre. Radiologists participating in the pilot test read 32 ULDCT cases (mean age, 52 years ± 23 (SD); 17 male; 16 cases with abnormalities). Cases were read on average in 4.1 ± 2.2 min (standard PACS) and 3.9 ± 1.9 min (AI-annotated PACS), with no significant difference (p = 0.48). 
Three out of four radiologists showed significant shifts (p < 0.02) in eye-tracking metrics, including saccade duration, saccade quantity, fixation duration, fixation quantity, and pupil diameter, when using the AI-annotated workflow. These changes align with prior findings linking such metrics to increased competency and reduced cognitive load, suggesting a more efficient visual search strategy in AI-assisted interpretation. Although HRV metrics did not correlate with experience, when combined with facial expression analysis, they helped identify key moments during the pilot test. The Radiology AI lab was successfully realised, implementing personal, interactional, and contextual states of the user-state sensing framework, enabling objective analysis of radiologists' workflows, and effectively capturing relevant biometrics. Future work will focus on expanding sensing of the contextual state of the user-state sensing framework, refining baseline determination, and continuing investigation of AI-enabled tools in radiology workflows.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143653020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
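The lab's heart-rate monitoring feeds HRV analysis; the abstract does not name the specific HRV metrics used, so the two standard time-domain ones below (SDNN and RMSSD over successive RR intervals) are assumptions:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain HRV from successive RR intervals (milliseconds).

    SDNN: standard deviation of RR intervals.
    RMSSD: root mean square of successive differences.
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd
```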
Lumos: Software for Multi-level Multi-reader Comparison of Cardiovascular Magnetic Resonance Late Gadolinium Enhancement Scar Quantification.
Journal of imaging informatics in medicine Pub Date : 2025-03-17 DOI: 10.1007/s10278-025-01437-2
Philine Reisdorf, Jonathan Gavrysh, Clemens Ammann, Maximilian Fenski, Christoph Kolbitsch, Steffen Lange, Anja Hennemuth, Jeanette Schulz-Menger, Thomas Hadler
{"title":"Lumos: Software for Multi-level Multi-reader Comparison of Cardiovascular Magnetic Resonance Late Gadolinium Enhancement Scar Quantification.","authors":"Philine Reisdorf, Jonathan Gavrysh, Clemens Ammann, Maximilian Fenski, Christoph Kolbitsch, Steffen Lange, Anja Hennemuth, Jeanette Schulz-Menger, Thomas Hadler","doi":"10.1007/s10278-025-01437-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01437-2","url":null,"abstract":"<p><p>Cardiovascular magnetic resonance imaging (CMR) offers state-of-the-art myocardial tissue differentiation. The CMR technique late gadolinium enhancement (LGE) currently provides the noninvasive gold standard for the detection of myocardial fibrosis. Typically, thresholding methods are used for fibrotic scar tissue quantification. A major challenge for standardized CMR assessment is large variations in the estimated scar for different methods. The aim was to improve quality assurance for LGE scar quantification, a multi-reader comparison tool \"Lumos\" was developed to support quality control for scar quantification methods. The thresholding methods and an exact rasterization approach were implemented, as well as a graphical user interface (GUI) with statistical and case-specific tabs. Twenty LGE cases were considered with half of them including artifacts and clinical results for eight scar quantification methods computed. Lumos was successfully implemented as a multi-level multi-reader comparison software, and differences between methods can be seen in the statistical results. Histograms visualize confounding effects of different methods. Connecting the statistical level with the case level allows for backtracking statistical differences to sources of differences in the threshold calculation. Being able to visualize the underlying groundwork for the different methods in the myocardial histogram gives the opportunity to identify causes for different thresholds. 
Lumos showed the differences in the clinical results between cases with artifacts and cases without artifacts. A video demonstration of Lumos is offered as supplementary material 1. Lumos allows for a multi-reader comparison for LGE scar quantification that offers insights into the origin of reader differences.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143653110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
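The abstract does not enumerate the thresholding methods Lumos compares; the n-SD family (threshold = remote-myocardium mean plus n standard deviations) is a common example of such a method and can be sketched as follows. This is an illustrative re-implementation of the general technique, not Lumos itself:

```python
import numpy as np

def scar_fraction(myo_intensities, remote_intensities, n_sd=5.0):
    """Scar burden by an n-SD LGE thresholding method.

    The threshold is mean + n_sd * SD of a remote (healthy) myocardium
    sample; myocardial pixels above it count as scar. Returns the
    threshold and the scar fraction of the myocardium.
    """
    remote = np.asarray(remote_intensities, float)
    thr = remote.mean() + n_sd * remote.std(ddof=1)
    myo = np.asarray(myo_intensities, float)
    return thr, float((myo > thr).mean())
```

Varying `n_sd` (e.g. 2 through 6) reproduces the kind of method-to-method scar variation such comparison tools are built to expose.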
AI-Based 3D Liver Segmentation and Volumetric Analysis in Living Donor Data.
Journal of imaging informatics in medicine Pub Date : 2025-03-14 DOI: 10.1007/s10278-025-01468-9
Sae Byeol Mun, Sang Tae Choi, Young Jae Kim, Kwang Gi Kim, Won Suk Lee
{"title":"AI-Based 3D Liver Segmentation and Volumetric Analysis in Living Donor Data.","authors":"Sae Byeol Mun, Sang Tae Choi, Young Jae Kim, Kwang Gi Kim, Won Suk Lee","doi":"10.1007/s10278-025-01468-9","DOIUrl":"https://doi.org/10.1007/s10278-025-01468-9","url":null,"abstract":"<p><p>This study investigated the application of deep learning for 3-dimensional (3D) liver segmentation and volumetric analysis in living donor liver transplantation. Using abdominal computed tomography data from 55 donors, this study aimed to evaluate the liver segmentation performance of various U-Net-based models, including 3D U-Net, RU-Net, DU-Net, and RDU-Net, before and after hepatectomy. Accurate liver volume measurement is critical in liver transplantation to ensure adequate functional recovery and minimize postoperative complications. The models were trained and validated using a fivefold cross-validation approach. Performance metrics such as Dice similarity coefficient (DSC), recall, specificity, precision, and accuracy were used to assess the segmentation results. The highest segmentation accuracy was achieved in preoperative images with a DSC of 95.73 ± 1.08%, while postoperative day 7 images showed the lowest performance with a DSC of 93.14 ± 2.10%. A volumetric analysis conducted to measure hepatic resection and regeneration rates revealed an average liver resection rate of 40.52 ± 8.89% and a regeneration rate of 13.50 ± 8.95% by postoperative day 63. A regression analysis was performed on the volumetric results of the artificial intelligence model's liver resection rate and regeneration rate, and all results were statistically significant at p < 0.0001. 
The results indicate high reliability and clinical applicability of deep learning models in accurately measuring liver volume and assessing regenerative capacity, thus enhancing the management and recovery of liver donors.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143635069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
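The DSC and resection-rate figures above follow standard definitions, which can be stated compactly (a sketch of the metrics only, not the study's segmentation pipeline):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def resection_rate(vol_pre, vol_post):
    """Percentage of the liver removed, from pre/post segmentation volumes."""
    return 100.0 * (vol_pre - vol_post) / vol_pre
```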
Radiomics with Ultrasound Radiofrequency Data for Improving Evaluation of Duchenne Muscular Dystrophy.
Journal of imaging informatics in medicine Pub Date : 2025-03-14 DOI: 10.1007/s10278-025-01450-5
Dong Yan, Qiang Li, Ya-Wen Chuang, Chia-Wei Lin, Jeng-Yi Shieh, Wen-Chin Weng, Po-Hsiang Tsui
{"title":"Radiomics with Ultrasound Radiofrequency Data for Improving Evaluation of Duchenne Muscular Dystrophy.","authors":"Dong Yan, Qiang Li, Ya-Wen Chuang, Chia-Wei Lin, Jeng-Yi Shieh, Wen-Chin Weng, Po-Hsiang Tsui","doi":"10.1007/s10278-025-01450-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01450-5","url":null,"abstract":"<p><p>Duchenne muscular dystrophy (DMD) is a rare and severe genetic neuromuscular disease, characterized by rapid progression and high mortality, highlighting the need for accurate ambulatory function assessment tools. Ultrasound imaging methods have been widely used for quantitative analysis. Radiomics, which converts medical images into data, combined with machine learning (ML), offers a promising solution. This study is aimed at utilizing radiomics to analyze different stages of data generated during B-mode image processing to evaluate the ambulatory function of DMD patients. The study included 85 participants, categorized into ambulatory and non-ambulatory groups based on their functional status. Ultrasound scans were utilized to capture backscattered radiofrequency data, which were then processed to generate envelope, normalized, and B-mode images. Radiomics analysis involved the manual segmentation of grayscale images and automatic feature extraction using specialized software, followed by feature selection using the maximal relevance and minimal redundancy method. The selected features were input into five ML algorithms, with model evaluation conducted via area under the receiver operating characteristic curve (AUROC). To ensure robustness, both leave-one-out cross-validation and repeated data splitting methods were employed. Additionally, multiple ML models were constructed and tested to assess their performance. The intensity values across all image types increased as walking ability declined, with significant differences observed between the ambulatory and non-ambulatory groups (p < 0.001). 
These groups exhibited similar diagnostic performance levels, with AUROC values below 0.8. However, radiofrequency (RF) images outperformed other types when radiomics was applied, notably achieving an AUROC value of 0.906. Additionally, combining multiple ML algorithms yielded a higher AUROC value of 0.912 using RF images as input. Radiomics analysis of RF data surpasses conventional B-mode imaging and other ultrasound-derived images in evaluating ambulatory function in DMD. Moreover, integrating multiple machine learning models further enhances classification performance. The proposed method in this study offers a promising framework for improving the accuracy and reliability of clinical follow-up evaluations, supporting more effective management of DMD. The code is available at https://github.com/Goldenyan/radiomicsUS .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143635070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
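AUROC, the evaluation metric used above, can be computed without any ML library via the rank-sum (Mann-Whitney U) formulation, with ties handled by average ranks:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve from classifier scores and binary labels."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = scores.argsort()
    s = scores[order]
    ranks = np.empty(len(s), float)
    # assign average 1-based ranks over runs of tied scores
    j = 0
    while j < len(s):
        k = j
        while k + 1 < len(s) and s[k + 1] == s[j]:
            k += 1
        ranks[order[j:k + 1]] = (j + k + 2) / 2.0
        j = k + 1
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)
```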
A Two-Stage Lightweight Deep Learning Framework for Mass Detection and Segmentation in Mammograms Using YOLOv5 and Depthwise SegNet.
Journal of imaging informatics in medicine Pub Date : 2025-03-14 DOI: 10.1007/s10278-025-01471-0
Dimitris Manolakis, Paschalis Bizopoulos, Antonios Lalas, Konstantinos Votis
{"title":"A Two-Stage Lightweight Deep Learning Framework for Mass Detection and Segmentation in Mammograms Using YOLOv5 and Depthwise SegNet.","authors":"Dimitris Manolakis, Paschalis Bizopoulos, Antonios Lalas, Konstantinos Votis","doi":"10.1007/s10278-025-01471-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01471-0","url":null,"abstract":"<p><p>Ensuring strict medical data privacy standards while delivering efficient and accurate breast cancer segmentation is a critical challenge. This paper addresses this challenge by proposing a lightweight solution capable of running directly in the user's browser, ensuring that medical data never leave the user's computer. Our proposed solution consists of a two-stage model: the pre-trained nano YoloV5 variation handles the task of mass detection, while a lightweight neural network model of just 20k parameters and an inference time of 21 ms per image addresses the segmentation problem. This highly efficient model in terms of inference speed and memory consumption was created by combining well-known techniques, such as the SegNet architecture and depthwise separable convolutions. The detection model manages an mAP@50 equal to 50.3% on the CBIS-DDSM dataset and 68.2% on the INbreast dataset. 
Despite its size, our segmentation model produces high-performance levels on the CBIS-DDSM (81.0% IoU, 89.4% Dice) and INbreast (77.3% IoU, 87.0% Dice) dataset.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143635068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
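The parameter savings from depthwise separable convolutions, one of the techniques named above, follow from simple counting (bias terms ignored; the channel sizes below are illustrative, not the paper's):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def dw_separable_params(c_in, c_out, k):
    """Depthwise separable = k x k depthwise conv + 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out
```

For a 3x3 layer mapping 64 to 128 channels, the standard convolution needs 73,728 weights while the depthwise separable version needs 8,768, roughly an 8x reduction, which is how such a segmentation head stays at ~20k parameters.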
Facilitating Radiograph Interpretation: Refined Generative Models for Precise Bone Suppression in Chest X-rays.
Journal of imaging informatics in medicine Pub Date : 2025-03-13 DOI: 10.1007/s10278-025-01461-2
Samar Ibrahim, Sahar Selim, Mustafa Elattar
{"title":"Facilitating Radiograph Interpretation: Refined Generative Models for Precise Bone Suppression in Chest X-rays.","authors":"Samar Ibrahim, Sahar Selim, Mustafa Elattar","doi":"10.1007/s10278-025-01461-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01461-2","url":null,"abstract":"<p><p>Chest X-ray (CXR) is crucial for diagnosing lung diseases, especially lung nodules. Recent studies indicate that bones, such as ribs and clavicles, obscure 82 to 95% of undiagnosed lung cancers. The development of computer-aided detection (CAD) systems with automated bone suppression is vital to improve detection rates and support early clinical decision-making. Current bone suppression methods face challenges: they often depend on manual subtraction of bone-only images from CXRs, leading to inefficiency and poor generalization; there is significant information loss in data compression within deep convolutional end-to-end architectures; and a balance between model efficiency and accuracy has not been sufficiently achieved in existing research. We introduce a novel end-to-end architecture, the mask-guided model, to address these challenges. Leveraging the Pix2Pix framework, our model enhances computational efficiency by reducing parameter count by 92.5%. It features a rib mask-guided module with a mask encoder and cross-attention mechanism, which provides spatial constraints, reduces information loss during encoder compression, and preserves non-relevant areas. An ablation study evaluates the impact of various factors. The model undergoes initial training on digitally reconstructed radiographs (DRRs) derived from CT projections for bone suppression and is fine-tuned on the JSRT dataset to accelerate convergence. The mask-guided model surpasses previous state-of-the-art methods, showing superior bone suppression performance in terms of structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and processing speed. 
It achieves an SSIM of 0.99 ± 0.002 and a PSNR of 36.14 ± 1.13 on the JSRT dataset. This study underscores the proposed model's effectiveness compared to existing methods, showcasing its capability to reduce model size and increase accuracy. This makes it well-suited for deployment in affordable, low-power hardware devices across various clinical settings.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143627331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
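The PSNR and SSIM metrics reported above can be sketched as follows. Note the SSIM here is a simplified single-window (global) variant for brevity; the standard metric averages local windows, so values will differ somewhat from published ones:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over the whole image as one window."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```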
A Thyroid Nodule Ultrasound Image Grading Model Integrating Medical Prior Knowledge.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-024-01120-y
Hua Chen, Chong Liu, Xiaoshi Cheng, Chenjun Jiang, Ying Wang
{"title":"A Thyroid Nodule Ultrasound Image Grading Model Integrating Medical Prior Knowledge.","authors":"Hua Chen, Chong Liu, Xiaoshi Cheng, Chenjun Jiang, Ying Wang","doi":"10.1007/s10278-024-01120-y","DOIUrl":"https://doi.org/10.1007/s10278-024-01120-y","url":null,"abstract":"<p><p>In recent years, there has been increasing research on computer-aided diagnosis (CAD) using deep learning and image processing techniques. Still, most studies have focused on the benign-malignant classification of nodules. In this study, we propose an integrated architecture for grading thyroid nodules based on the Chinese Thyroid Imaging Reporting and Data System (C-TIRADS). The method combines traditional handcrafted features with deep features in the extraction process. In the preprocessing stage, a pseudo-artifact removal algorithm based on the fast marching method (FMM) is employed, followed by a hybrid median filtering for noise reduction. Contrast-limited adaptive histogram equalization is used for contrast enhancement to restore and enhance the information in ultrasound images. In the feature extraction stage, the improved ShuffleNetV2 network with multi-head self-attention mechanism is selected, and its extracted features are fused with medical prior knowledge features. Finally, a multi-class classification task is performed using the eXtreme Gradient Boosting (XGBoost) classifier. The dataset used in this study consists of 922 original images, including 149 examples belonging to class 2, 140 examples to class 3, 156 examples to class 4A, 114 examples to class 4B, 123 examples to class 4C, and 240 examples to class 5. The model is trained for 2000 epochs. The accuracy, precision, recall, F1 score, and AUC value of the proposed method are 97.17%, 97.65%, 97.17%, 0.9834, and 0.9855, respectively. 
The results demonstrate that the fusion of medical prior knowledge based on C-TIRADS and deep features from convolutional neural networks can effectively improve the overall performance of thyroid nodule diagnosis, providing a new feasible solution for developing clinical CAD systems for thyroid nodule ultrasound diagnosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
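The accuracy, precision, recall, and F1 figures above follow standard multi-class definitions; a minimal sketch is below (macro averaging is assumed here, which the abstract does not specify):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy and macro-averaged precision/recall/F1 from label vectors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    ps, rs, fs = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        ps.append(p)
        rs.append(r)
        fs.append(2 * p * r / (p + r) if p + r else 0.0)
    return acc, float(np.mean(ps)), float(np.mean(rs)), float(np.mean(fs))
```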
I-BrainNet: Deep Learning and Internet of Things (DL/IoT)-Based Framework for the Classification of Brain Tumor.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-025-01470-1
Abdullahi Umar Ibrahim, Glodie Mpia Engo, Ibrahim Ame, Chidi Wilson Nwekwo, Fadi Al-Turjman
{"title":"I-BrainNet: Deep Learning and Internet of Things (DL/IoT)-Based Framework for the Classification of Brain Tumor.","authors":"Abdullahi Umar Ibrahim, Glodie Mpia Engo, Ibrahim Ame, Chidi Wilson Nwekwo, Fadi Al-Turjman","doi":"10.1007/s10278-025-01470-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01470-1","url":null,"abstract":"<p><p>Brain tumor is categorized as one of the most fatal form of cancer due to its location and difficulty in terms of diagnostics. Medical expert relies on two key approaches which include biopsy and MRI. However, these techniques have several setbacks which include the need of medical experts, inaccuracy, miss-diagnosis as a result of anxiety or workload which may lead to patient morbidity and mortality. This opens a gap for the need of precise diagnosis and staging to guide appropriate clinical decisions. In this study, we proposed the application of deep learning (DL)-based techniques for the classification of MRI vs non-MRI and tumor vs no tumor. In order to accurately discriminate between classes, we acquired brain tumor multimodal image (CT and MRI) datasets, which comprises of 9616 MRI and CT scans in which 8000 are selected for discrimination between MRI and non-MRI and 4000 for the discrimination between tumor and no tumor cases. The acquired images undergo image pre-processing, data split, data augmentation and model training. The images are trained using 4 DL networks which include MobileNetV2, ResNet, Ineptionv3 and VGG16. Performance evaluation of the DL architectures and comparative analysis has shown that pre-trained MobileNetV2 achieved the best result across all metrics with 99.94% accuracy for the discrimination between MRI and non-MRI and 99.00% for the discrimination between tumor and no tumor. 
Moreover, I-BrainNet which is a DL/IoT-based framework is developed for the real-time classification of brain tumor.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
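The data-augmentation step named above is not specified beyond the pipeline description; a typical flip/rotate augmentation of the kind commonly used for such training can be sketched as follows (the specific transforms are assumptions, not the authors'):

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and rotate a 2-D image array for training augmentation."""
    if rng.random() < 0.5:
        img = np.fliplr(img)   # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)   # vertical flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # 0/90/180/270 degrees
    return img
```

Each call produces one randomized view; applying it per epoch effectively multiplies the training set without new acquisitions.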