{"title":"Contrastive Learning vs. Self-Learning vs. Deformable Data Augmentation in Semantic Segmentation of Medical Images.","authors":"Hossein Arabi, Habib Zaidi","doi":"10.1007/s10278-024-01159-x","DOIUrl":"10.1007/s10278-024-01159-x","url":null,"abstract":"<p><p>To develop a robust segmentation model, encoding the underlying features/structures of the input data is essential to discriminate the target structure from the background. To enrich the extracted feature maps, contrastive learning and self-learning techniques are employed, particularly when the size of the training dataset is limited. In this work, we set out to investigate the impact of contrastive learning and self-learning on the performance of the deep learning-based semantic segmentation. To this end, three different datasets were employed used for brain tumor and hippocampus delineation from MR images (BraTS and Decathlon datasets, respectively) and kidney segmentation from CT images (Decathlon dataset). Since data augmentation techniques are also aimed at enhancing the performance of deep learning methods, a deformable data augmentation technique was proposed and compared with contrastive learning and self-learning frameworks. The segmentation accuracy for the three datasets was assessed with and without applying data augmentation, contrastive learning, and self-learning to individually investigate the impact of these techniques. The self-learning and deformable data augmentation techniques exhibited comparable performance with Dice indices of 0.913 ± 0.030 and 0.920 ± 0.022 for kidney segmentation, 0.890 ± 0.035 and 0.898 ± 0.027 for hippocampus segmentation, and 0.891 ± 0.045 and 0.897 ± 0.040 for lesion segmentation, respectively. These two approaches significantly outperformed the contrastive learning and the original model with Dice indices of 0.871 ± 0.039 and 0.868 ± 0.042 for kidney segmentation, 0.872 ± 0.045 and 0.865 ± 0.048 for hippocampus segmentation, and 0.870 ± 0.049 and 0.860 ± 0.058 for lesion segmentation, respectively. The combination of self-learning with deformable data augmentation led to a robust segmentation model with no outliers in the outcomes. This work demonstrated the beneficial impact of self-learning and deformable data augmentation on organ and lesion segmentation, where no additional training datasets are needed.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3217-3230"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612072/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141302290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Segmentation of Lymph Nodes on Neck CT Scans Using Deep Learning.","authors":"Md Mahfuz Al Hasan, Saba Ghazimoghadam, Padcha Tunlayadechanont, Mohammed Tahsin Mostafiz, Manas Gupta, Antika Roy, Keith Peters, Bruno Hochhegger, Anthony Mancuso, Navid Asadizanjani, Reza Forghani","doi":"10.1007/s10278-024-01114-w","DOIUrl":"10.1007/s10278-024-01114-w","url":null,"abstract":"<p><p>Early and accurate detection of cervical lymph nodes is essential for the optimal management and staging of patients with head and neck malignancies. Pilot studies have demonstrated the potential for radiomic and artificial intelligence (AI) approaches in increasing diagnostic accuracy for the detection and classification of lymph nodes, but implementation of many of these approaches in real-world clinical settings would necessitate an automated lymph node segmentation pipeline as a first step. In this study, we aim to develop a non-invasive deep learning (DL) algorithm for detecting and automatically segmenting cervical lymph nodes in 25,119 CT slices from 221 normal neck contrast-enhanced CT scans from patients without head and neck cancer. We focused on the most challenging task of segmentation of small lymph nodes, evaluated multiple architectures, and employed U-Net and our adapted spatial context network to detect and segment small lymph nodes measuring 5-10 mm. The developed algorithm achieved a Dice score of 0.8084, indicating its effectiveness in detecting and segmenting cervical lymph nodes despite their small size. A segmentation framework successful in this task could represent an essential initial block for future algorithms aiming to evaluate small objects such as lymph nodes in different body parts, including small lymph nodes looking normal to the naked human eye but harboring early nodal metastases.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2955-2966"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612088/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141474456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Streamlining Acute Abdominal Aortic Dissection Management-An AI-based CT Imaging Workflow.","authors":"Anish Raj, Ahmad Allababidi, Hany Kayed, Andreas L H Gerken, Julia Müller, Stefan O Schoenberg, Frank G Zöllner, Johann S Rink","doi":"10.1007/s10278-024-01164-0","DOIUrl":"10.1007/s10278-024-01164-0","url":null,"abstract":"<p><p>Life-threatening acute aortic dissection (AD) demands timely diagnosis for effective intervention. To streamline intrahospital workflows, automated detection of AD in abdominal computed tomography (CT) scans seems useful to assist humans. We aimed at creating a robust convolutional neural network (CNN)-based pipeline capable of real-time screening for signs of abdominal AD in CT. In this retrospective study, abdominal CT data from AD patients presenting with AD and from non-AD patients were collected (n 195, AD cases 94, mean age 65.9 years, female ratio 35.8%). A CNN-based algorithm was developed with the goal of enabling a robust, automated, and highly sensitive detection of abdominal AD. Two sets from internal (n = 32, AD cases 16) and external sources (n = 1189, AD cases 100) were procured for validation. The abdominal region was extracted, followed by the automatic isolation of the aorta region of interest (ROI) and highlighting of the membrane via edge extraction, followed by classification of the aortic ROI as dissected/healthy. A fivefold cross-validation was employed on the internal set, and an ensemble of the 5 trained models was used to predict the internal and external validation set. Evaluation metrics included receiver operating characteristic curve (AUC) and balanced accuracy. The AUC, balanced accuracy, and sensitivity scores of the internal dataset were 0.932 (CI 0.891-0.963), 0.860, and 0.885, respectively. For the internal validation dataset, the AUC, balanced accuracy, and sensitivity scores were 0.887 (CI 0.732-0.988), 0.781, and 0.875, respectively. Furthermore, for the external validation dataset, AUC, balanced accuracy, and sensitivity scores were 0.993 (CI 0.918-0.994), 0.933, and 1.000, respectively. The proposed automated pipeline could assist humans in expediting acute aortic dissection management when integrated into clinical workflows.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2729-2739"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RDLR: A Robust Deep Learning-Based Image Registration Method for Pediatric Retinal Images.","authors":"Hao Zhou, Wenhan Yang, Limei Sun, Li Huang, Songshan Li, Xiaoling Luo, Yili Jin, Wei Sun, Wenjia Yan, Jing Li, Xiaoyan Ding, Yao He, Zhi Xie","doi":"10.1007/s10278-024-01154-2","DOIUrl":"10.1007/s10278-024-01154-2","url":null,"abstract":"<p><p>Retinal diseases stand as a primary cause of childhood blindness. Analyzing the progression of these diseases requires close attention to lesion morphology and spatial information. Standard image registration methods fail to accurately reconstruct pediatric fundus images containing significant distortion and blurring. To address this challenge, we proposed a robust deep learning-based image registration method (RDLR). The method consisted of two modules: registration module (RM) and panoramic view module (PVM). RM effectively integrated global and local feature information and learned prior information related to the orientation of images. PVM was capable of reconstructing spatial information in panoramic images. Furthermore, as the registration model was trained on over 280,000 pediatric fundus images, we introduced a registration annotation automatic generation process coupled with a quality control module to ensure the reliability of training data. We compared the performance of RDLR to the other methods, including conventional registration pipeline (CRP), voxel morph (WM), generalizable image matcher (GIM), and self-supervised techniques (SS). RDLR achieved significantly higher registration accuracy (average Dice score of 0.948) than the other methods (ranging from 0.491 to 0.802). The resulting panoramic retinal maps reconstructed by RDLR also demonstrated substantially higher fidelity (average Dice score of 0.960) compared to the other methods (ranging from 0.720 to 0.783). Overall, the proposed method addressed key challenges in pediatric retinal imaging, providing an effective solution to enhance disease diagnosis. Our source code is available at https://github.com/wuwusky/RobustDeepLeraningRegistration .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3131-3145"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612083/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Histological Subtype Classification of Non-Small Cell Lung Cancer with Radiomics and 3D Convolutional Neural Networks.","authors":"Baoyu Liang, Chao Tong, Jingying Nong, Yi Zhang","doi":"10.1007/s10278-024-01152-4","DOIUrl":"10.1007/s10278-024-01152-4","url":null,"abstract":"<p><p>Non-small cell lung carcinoma (NSCLC) is the most common type of pulmonary cancer, one of the deadliest malignant tumors worldwide. Given the increased emphasis on the precise management of lung cancer, identifying various subtypes of NSCLC has become pivotal for enhancing diagnostic standards and patient prognosis. In response to the challenges presented by traditional clinical diagnostic methods for NSCLC pathology subtypes, which are invasive, rely on physician experience, and consume medical resources, we explore the potential of radiomics and deep learning to automatically and non-invasively identify NSCLC subtypes from computed tomography (CT) images. An integrated model is proposed that investigates both radiomic features and deep learning features and makes comprehensive decisions based on the combination of these two features. To extract deep features, a three-dimensional convolutional neural network (3D CNN) is proposed to fully utilize the 3D nature of CT images while radiomic features are extracted by radiomics. These two types of features are combined and classified with multi-head attention (MHA) in our proposed model. To our knowledge, this is the first work that integrates different learning methods and features from varied sources in histological subtype classification of lung cancer. Experiments are organized on a mixed dataset comprising NSCLC Radiomics and Radiogenomics. The results show that our proposed model achieves 0.88 in accuracy and 0.89 in the area under the receiver operating characteristic curve (AUC) when distinguishing lung adenocarcinoma (ADC) and lung squamous cell carcinoma (SqCC), indicating the potential of being a non-invasive way for predicting histological subtypes of lung cancer.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2895-2909"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612112/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141302291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning for Describing Breast Ultrasound Images with BI-RADS Terms.","authors":"Mikel Carrilero-Mardones, Manuela Parras-Jurado, Alberto Nogales, Jorge Pérez-Martín, Francisco Javier Díez","doi":"10.1007/s10278-024-01155-1","DOIUrl":"10.1007/s10278-024-01155-1","url":null,"abstract":"<p><p>Breast cancer is the most common cancer in women. Ultrasound is one of the most used techniques for diagnosis, but an expert in the field is necessary to interpret the test. Computer-aided diagnosis (CAD) systems aim to help physicians during this process. Experts use the Breast Imaging-Reporting and Data System (BI-RADS) to describe tumors according to several features (shape, margin, orientation...) and estimate their malignancy, with a common language. To aid in tumor diagnosis with BI-RADS explanations, this paper presents a deep neural network for tumor detection, description, and classification. An expert radiologist described with BI-RADS terms 749 nodules taken from public datasets. The YOLO detection algorithm is used to obtain Regions of Interest (ROIs), and then a model, based on a multi-class classification architecture, receives as input each ROI and outputs the BI-RADS descriptors, the BI-RADS classification (with 6 categories), and a Boolean classification of malignancy. Six hundred of the nodules were used for 10-fold cross-validation (CV) and 149 for testing. The accuracy of this model was compared with state-of-the-art CNNs for the same task. This model outperforms plain classifiers in the agreement with the expert (Cohen's kappa), with a mean over the descriptors of 0.58 in CV and 0.64 in testing, while the second best model yielded kappas of 0.55 and 0.59, respectively. Adding YOLO to the model significantly enhances the performance (0.16 in CV and 0.09 in testing). More importantly, training the model with BI-RADS descriptors enables the explainability of the Boolean malignancy classification without reducing accuracy.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2940-2954"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612129/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141461532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ISLE: An Intelligent Streaming Framework for High-Throughput AI Inference in Medical Imaging.","authors":"Pranav Kulkarni, Adway Kanhere, Eliot L Siegel, Paul H Yi, Vishwa S Parekh","doi":"10.1007/s10278-024-01173-z","DOIUrl":"10.1007/s10278-024-01173-z","url":null,"abstract":"<p><p>As the adoption of artificial intelligence (AI) systems in radiology grows, the increase in demand for greater bandwidth and computational resources can lead to greater infrastructural costs for healthcare providers and AI vendors. To that end, we developed ISLE, an intelligent streaming framework to address inefficiencies in current imaging infrastructures. Our framework draws inspiration from video-on-demand platforms to intelligently stream medical images to AI vendors at an optimal resolution for inference from a single high-resolution copy using progressive encoding. We hypothesize that ISLE can dramatically reduce the bandwidth and computational requirements for AI inference, while increasing throughput (i.e., the number of scans processed by the AI system per second). We evaluate our framework by streaming chest X-rays for classification and abdomen CT scans for liver and spleen segmentation and comparing them with the original versions of each dataset. For classification, our results show that ISLE reduced data transmission and decoding time by at least 92% and 88%, respectively, while increasing throughput by more than 3.72 × . For both segmentation tasks, ISLE reduced data transmission and decoding time by at least 82% and 88%, respectively, while increasing throughput by more than 2.9 × . In all three tasks, the ISLE streamed data had no impact on the AI system's diagnostic performance (all P > 0.05). Therefore, our results indicate that our framework can address inefficiencies in current imaging infrastructures by improving data and computational efficiency of AI deployments in the clinical environment without impacting clinical decision-making using AI systems.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3250-3263"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612124/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141474460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expansive Receptive Field and Local Feature Extraction Network: Advancing Multiscale Feature Fusion for Breast Fibroadenoma Segmentation in Sonography.","authors":"Yongxin Guo, Yufeng Zhou","doi":"10.1007/s10278-024-01142-6","DOIUrl":"10.1007/s10278-024-01142-6","url":null,"abstract":"<p><p>Fibroadenoma is a common benign breast disease that affects women of all ages. Early diagnosis can greatly improve the treatment outcomes and reduce the associated pain. Computer-aided diagnosis (CAD) has great potential to improve diagnosis accuracy and efficiency. However, its application in sonography is limited. A network that utilizes expansive receptive fields and local information learning was proposed for the accurate segmentation of breast fibroadenomas in sonography. The architecture comprises the Hierarchical Attentive Fusion module, which conducts local information learning through channel-wise and pixel-wise perspectives, and the Residual Large-Kernel module, which utilizes multiscale large kernel convolution for global information learning. Additionally, multiscale feature fusion in both modules was included to enhance the stability of our network. Finally, an energy function and a data augmentation method were incorporated to fine-tune low-level features of medical images and improve data enhancement. The performance of our model is evaluated using both our local clinical dataset and a public dataset. Mean pixel accuracy (MPA) of 93.93% and 86.06% and mean intersection over union (MIOU) of 88.16% and 73.19% were achieved on the clinical and public datasets, respectively. They are significantly improved over state-of-the-art methods such as SegFormer (89.75% and 78.45% in MPA and 83.26% and 71.85% in MIOU, respectively). The proposed feature extraction strategy, combining local pixel-wise learning with an expansive receptive field for global information perception, demonstrates excellent feature learning capabilities. Due to this powerful and unique local-global feature extraction capability, our deep network achieves superior segmentation of breast fibroadenoma in sonography, which may be valuable in early diagnosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2810-2824"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141185071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Pilot Clinical and Technical Validation of an Immersive Virtual Reality Platform for 3D Anatomical Modeling and Contouring in Support of Surgical and Radiation Oncology Treatment Planning.","authors":"Jason Belec, Justin Sutherland, Matthew Volpini, Kawan Rakhra, Dal Granville, Dan La Russa, Teresa Flaxman, Eduardo Portela De Oliveira, Rafael Glikstein, Marlise P Dos Santos, Joel Werier, Miller MacPherson, Richard I Aviv, Vimoj Nair","doi":"10.1007/s10278-024-01048-3","DOIUrl":"10.1007/s10278-024-01048-3","url":null,"abstract":"<p><p>The aim of this study was to validate a novel medical virtual reality (VR) platform used for medical image segmentation and contouring in radiation oncology and 3D anatomical modeling and simulation for planning medical interventions, including surgery. The first step of the validation was to verify quantitatively and qualitatively that the VR platform can produce substantially equivalent 3D anatomical models, image contours, and measurements to those generated with existing commercial platforms. To achieve this, a total of eight image sets and 18 structures were segmented using both VR and reference commercial platforms. The image sets were chosen to cover a broad range of scanner manufacturers, modalities, and voxel dimensions. The second step consisted of evaluating whether the VR platform could provide efficiency improvements for target delineation in radiation oncology planning. To assess this, the image sets for five pediatric patients with resected standard-risk medulloblastoma were used to contour target volumes in support of treatment planning of craniospinal irradiation, requiring complete inclusion of the entire cerebral-spinal volume. Structures generated in the VR and the commercial platforms were found to have a high degree of similarity, with dice similarity coefficient ranging from 0.963 to 0.985 for high-resolution images and 0.920 to 0.990 for lower resolution images. Volume, cross-sectional area, and length measurements were also found to be in agreement with reference values derived from a commercial system, with length measurements having a maximum difference of 0.22 mm, angle measurements having a maximum difference of 0.04°, and cross-sectional area measurements having a maximum difference of 0.16 mm<sup>2</sup>. The VR platform was also found to yield significant efficiency improvements, reducing the time required to delineate complex cranial and spinal target volumes by an average of 50% or 29 min.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3009-3024"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612127/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Follicular Thyroid Neoplasm and Malignancy of Follicular Thyroid Neoplasm Using Multiparametric MRI.","authors":"Bin Song, Tingting Zheng, Hao Wang, Lang Tang, Xiaoli Xie, Qingyin Fu, Weiyan Liu, Pu-Yeh Wu, Mengsu Zeng","doi":"10.1007/s10278-024-01102-0","DOIUrl":"10.1007/s10278-024-01102-0","url":null,"abstract":"<p><p>The study aims to evaluate multiparametric magnetic resonance imaging (MRI) for differentiating Follicular thyroid neoplasm (FTN) from non-FTN and malignant FTN (MFTN) from benign FTN (BFTN). We retrospectively analyzed 702 postoperatively confirmed thyroid nodules, and divided them into training (n = 482) and validation (n = 220) cohorts. The 133 FTNs were further split into BFTN (n = 116) and MFTN (n = 17) groups. Employing univariate and multivariate logistic regression, we identified independent predictors of FTN and MFTN, and subsequently develop a nomogram for FTN and a risk score system (RSS) for MFTN prediction. We assessed performance of nomogram through its discrimination, calibration, and clinical utility. The diagnostic performance of the RSS for MFTN was further compared with the performance of the Thyroid Imaging Reporting and Data System (TIRADS). The nomogram, integrating independent predictors, demonstrated robust discrimination and calibration in differentiating FTN from non-FTN in both training cohort (AUC = 0.947, Hosmer-Lemeshow P = 0.698) and validation cohort (AUC = 0.927, Hosmer-Lemeshow P = 0.088). Key risk factors for differentiating MFTN from BFTN included tumor size, restricted diffusion, and cystic degeneration. The AUC of the RSS for MFTN prediction was 0.902 (95% CI 0.798-0.971), outperforming five TIRADS with a sensitivity of 73.3%, specificity of 95.1%, accuracy of 92.4%, and positive and negative predictive values of 68.8% and 96.1%, respectively, at the optimal cutoff. MRI-based models demonstrate excellent diagnostic performance for preoperative predicting of FTN and MFTN, potentially guiding clinicians in optimizing therapeutic decision-making.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2852-2864"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}