{"title":"Automated liver magnetic resonance elastography quality control and liver stiffness measurement using deep learning.","authors":"Efe Ozkaya, Heriberto A Nieves-Vazquez, Murat Yuce, Kazuya Yasokawa, Emre Altinmakas, Jun Ueda, Bachir Taouli","doi":"10.1007/s00261-025-04883-2","DOIUrl":"https://doi.org/10.1007/s00261-025-04883-2","url":null,"abstract":"<p><strong>Purpose: </strong>Magnetic resonance elastography (MRE) measures liver stiffness for fibrosis staging, but its utility can be hindered by quality control (QC) challenges and measurement variability. The objective of the study was to fully automate liver MRE QC and liver stiffness measurement (LSM) using a deep learning (DL) method.</p><p><strong>Methods: </strong>In this retrospective, single center, IRB-approved human study, a curated dataset involved 897 MRE magnitude slices from 146 2D MRE scans [1.5 T and 3 T MRI, 2D Gradient Echo (GRE), and 2D Spin Echo-Echo Planar Imaging (SE-EPI)] of 69 patients (37 males, mean age 51.6 years). A SqueezeNet-based binary QC model was trained using combined and individual inputs of MRE magnitude slices and their 2D Fast-Fourier transforms to detect artifacts from patient motion, aliasing, and blurring. Three independent observers labeled MRE magnitude images as 0 (non-diagnostic quality) or 1 (diagnostic quality) to create a reference standard. A 2D U-Net segmentation model was trained on diagnostic slices with liver masks to support LSM. Intersection over union between the predicted segmentation and confidence masks identified measurable areas for LSM on elastograms. Cohen's unweighted Kappa coefficient, mean LSM error (%), and intra-class correlation coefficient were calculated to compare the DL-assisted approach with the observers' annotations. An efficiency analysis compared the DL-assisted vs manual LSM durations.</p><p><strong>Results: </strong>The top QC ensemble model (using MRE magnitude alone) achieved accuracy, precision, and recall of 0.958, 0.982, and 0.886, respectively. The mean LSM error between the DL-assisted approach and the reference standard was 1.9% ± 4.6%. DL-assisted approach completed LSM for 29 diagnostic slices in under 1 s, compared to 20 min manually.</p><p><strong>Conclusion: </strong>An automated DL-based classification of liver MRE diagnostic quality, liver segmentation, and LSM approach demonstrates a promising high performance, with potential for clinical adoption.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143633149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and validation of a nomogram model based on vascular entry sign for predicting lymphovascular invasion in gastric cancer.","authors":"Jing Zhang, Peng-Hui Shen, Jun-Bo Wu, Qin Feng, Xiao-Ling Zhang, Rui-Na Jin, Yin-Hao Yang, Mei-Xi Zhou, Wen-Yu Tan, Jian Hou, Qin-Meng Yi, Tian-Mei Hou, Yong-Ai Li, Wen-Qing Hu","doi":"10.1007/s00261-025-04812-3","DOIUrl":"https://doi.org/10.1007/s00261-025-04812-3","url":null,"abstract":"<p><strong>Background: </strong>To evaluate the predictive value of a nomogram based on the vascular entry sign for lymphovascular invasion in gastric cancer.</p><p><strong>Methods: </strong>A total of 135 patients with histopathologically confirmed gastric cancer from August 2021 to November 2022 were enrolled. All patients underwent contrast-enhanced CT scans. Utilizing a random number method, patients were randomly assigned to either a training dataset (n = 96) or a validation dataset (n = 39) in a 7:3 ratio. CT images and clinical characteristics of the patients were collected. Both univariate and multivariate analyses were conducted to identify independent factors influencing lymphovascular invasion in gastric cancer. A nomogram model was developed, and its diagnostic performance and clinical utility were assessed using receiver operating characterist (ROC) curves, calibration curves, and decision curve analysis (DCA).</p><p><strong>Results: </strong>The multivariate analysis revealed that the vascular entry sign, clinical T stage, and clinical N stage independently influenced the occurrence of factors for lymphovascular invasion in gastric cancer (P < 0.05). A predictive nomogram model was developed for determining LVI status in gastric cancer. The AUC of the nomogram model in the training dataset and validation dataset were 0.878 (95% CI: 0.808-0.948) and 0.866 (95% CI: 0.723-1.000), respectively. The calibration curve and decision curve showed that the model had good reliability and good clinical validity.</p><p><strong>Conclusion: </strong>The model established based on the factors of vascular entry sign, clinical T stage, and clinical N stage can effectively predict lymphovascular invasion in gastric cancer.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143612968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The presence of prostate MRI-visible lesions at follow-up biopsy as a risk factor for histopathological upgrading during active surveillance.","authors":"Publio Cesar Cavalcante Viana, Paulo Victor Alves Pinto, Natally Horvat, Marcelo Araújo Queiroz, Maurício Dener Cordeiro, Rafael Ferreira Coelho, Leonardo Cardili, Jose Pontes, Giovanni Guido Cerri, William Carlos Nahas","doi":"10.1007/s00261-025-04871-6","DOIUrl":"https://doi.org/10.1007/s00261-025-04871-6","url":null,"abstract":"<p><strong>Objective: </strong>To prospectively determine the ability of visible lesions on multiparametric MRI (PI-RADS 4-5) and commonly used biomarkers to predict disease upgrading on rebiopsy in men with low-risk prostate cancer (PCa) enrolled in active surveillance (AS).</p><p><strong>Materials and methods: </strong>For this prospective study, approved by the Institutional Review Board (IRB), we selected consecutive patients with low-risk, low-grade, and localized prostate cancer (PCa) from our active surveillance (AS) program, who were enrolled between March 2014 and December 2020. Patients who had undergone previous prostate surgery, hormonal treatment, had contraindications for mpMRI, or transrectal ultrasound-guided (TRUS) biopsy were excluded from this study. All eligible patients underwent mpMRI at least 3 months after the initial biopsy, followed by MRI-targeted TRUS-guided re-biopsy within 12 months after enrollment. The mpMRI studies were evaluated by an experienced radiologist using the PI-RADS v2 classification. Statistical significance was determined by comparing the results from the MRI with the pathology data from rebiopsy.</p><p><strong>Results: </strong>There were 240 patients included. Overall upgrading rate was 41.2% (99/240), higher among patients classified as PIRADS 4 or 5 (77%). MRI sensitivity was 77.7% and specificity was 83.6% on re-biopsy. Visible lesion on mpMRI, PSA density and 3 + /12 positive cores at the first biopsy were good predictors of disease upgrade on rebiopsy. On our predictive model, patients with PI-RADS 4 or 5, PSA density > 0.15 ng/mL/cm<sup>3</sup>, and 3 + /12 positive cores at first biopsy had 92.4% chance of having clinically significant PCa.</p><p><strong>Conclusion: </strong>Patients in AS with PI-RADS 4 or 5 lesions, PSA density > 0.15 ng/mL/cm<sup>3</sup> and 3 + /12 positive cores at first biopsy have a high probability of having significant PCa on re-biopsy.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143612996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple perception contrastive learning for automated ovarian tumor classification in CT images.","authors":"Lingwei Li, Tongtong Liu, Peng Wang, Lianzheng Su, Lei Wang, Xinmiao Wang, Chidao Chen","doi":"10.1007/s00261-025-04879-y","DOIUrl":"https://doi.org/10.1007/s00261-025-04879-y","url":null,"abstract":"<p><p>Ovarian cancer is among the most common malignant tumours in women worldwide, and early identification is essential for enhancing patient survival chances. The development of automated and trustworthy diagnostic techniques is necessary because traditional CT picture processing mostly depends on the subjective assessment of radiologists, which can result in variability. Deep learning approaches in medical image analysis have advanced significantly, particularly showing considerable promise in the automatic categorisation of ovarian tumours. This research presents an automated diagnostic approach for ovarian tumour CT images utilising supervised contrastive learning and a Multiple Perception Encoder (MP Encoder). The approach incorporates T-Pro technology to augment data diversity and simulates semantic perturbations to increase the model's generalisation capability. The incorporation of Multi-Scale Perception Module (MSP Module) and Multi-Attention Module (MA Module) enhances the model's sensitivity to the intricate morphology and subtle characteristics of ovarian tumours, resulting in improved classification accuracy and robustness, ultimately achieving an average classification accuracy of 98.43%. Experimental results indicate the method's exceptional efficacy in ovarian tumour classification, particularly in cases involving tumours with intricate morphology or worse picture quality, thereby markedly enhancing classification accuracy. This advanced deep learning framework proficiently tackles the complexities of ovarian tumour CT image interpretation, offering clinicians enhanced diagnostic support and aiding in the optimisation of early detection and treatment strategies for ovarian cancer.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143612971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated detection of small hepatocellular carcinoma in cirrhotic livers: applying deep learning to Gd-EOB-DTPA-enhanced MRI.","authors":"JunQiang Lei, YongSheng Xu, YuanHui Zhu, ShanShan Jiang, Song Tian, Yi Zhu","doi":"10.1007/s00261-025-04853-8","DOIUrl":"https://doi.org/10.1007/s00261-025-04853-8","url":null,"abstract":"<p><strong>Objectives: </strong>To develop an automated deep learning (DL) methodology for detecting small hepatocellular carcinoma (sHCC) in cirrhotic livers, leveraging Gd-EOB-DTPA-enhanced MRI.</p><p><strong>Methods: </strong>The present retrospective study included a total of 120 patients with cirrhosis, comprising 78 patients with sHCC and 42 patients with non-HCC cirrhosis, who were selected through stratified sampling. The dataset was divided into training and testing sets (8:2 ratio). The nnU-Net exhibits enhanced capabilities in segmenting small objects. The segmentation performance was assessed using the Dice coefficient. The ability to distinguish between sHCC and non-HCC lesions was evaluated through ROC curves, AUC scores and P values. The case-level detection performance for sHCC was evaluated through several metrics: accuracy, sensitivity, and specificity.</p><p><strong>Results: </strong>The AUCs for distinguishing sHCC patients from non-HCC patients at the lesion level were 0.967 and 0.864 for the training and test cohorts, respectively, both of which were statistically significant at P < 0.001. At the case level, distinguishing between patients with sHCC and patients with cirrhosis resulted in accuracies of 92.5% (95% CI, 85.1-96.9%) and 81.5% (95% CI, 61.9-93.7%), sensitivities of 95.1% (95% CI, 86.3-99.0%) and 88.2% (95% CI, 63.6-98.5%), and specificities of 87.5% (95% CI, 71.0-96.5%) and 70% (95% CI, 34.8-93.3%) for the training and test sets, respectively.</p><p><strong>Conclusion: </strong>The DL methodology demonstrated its efficacy in detecting sHCC within a cohort of patients with cirrhosis.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143584144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method of matching nodes between MRI and pathology with MRI-based 3D node map in rectal cancer.","authors":"Qing-Yang Li, Xin-Yue Yan, Zhen Guan, Rui-Jia Sun, Qiao-Yuan Lu, Xiao-Ting Li, Xiao-Yan Zhang, Ying-Shi Sun","doi":"10.1007/s00261-025-04826-x","DOIUrl":"10.1007/s00261-025-04826-x","url":null,"abstract":"<p><strong>Purpose: </strong>To propose a node-by-node matching method between MRI and pathology with 3D node maps based on preoperative MRI for rectal cancer patients to improve the yet unsatisfactory diagnostic performance of nodal status in rectal cancer.</p><p><strong>Methods: </strong>This methodological study prospectively enrolled consecutive participants with rectal cancer who underwent preoperative MRI and radical surgery from December 2021 to August 2023. All nodes with short-axis diameters of ≥ 3 mm within the mesorectum were regarded as target nodes and were localized in three directions based on the positional relationship on MRI and drawn on a node map with the primary tumor as the main reference, which was used as a template for node-by-node matching with pathological evaluation. Patient and nodal-level analyses were performed to investigate factors affecting the matching accuracy.</p><p><strong>Results: </strong>545 participants were included, of whom 253 received direct surgery and 292 received surgery after neoadjuvant therapy (NAT). In participants who underwent direct surgery, 1782 target nodes were identified on MRI, of which 1302 nodes (73%) achieved matching with pathology, with 1018 benign and 284 metastatic. In participants who underwent surgery after NAT, 1277 target nodes were identified and 918 nodes (72%) achieved matching, of which 689 were benign and 229 were metastatic. Advanced disease and proximity to primary tumor resulted in matching difficulties.</p><p><strong>Conclusion: </strong>An easy-to-use and reliable method of node-by-node matching between MRI and pathology with 3D node map based on preoperative MRI was constructed for rectal cancer, which provided reliable node-based ground-truth labels for further radiological studies.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143582157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Assessment of diagnostic performance and complication rate in percutaneous lung biopsy based on target nodule size.","authors":"Andrew W Bowman, Zhuo Li","doi":"10.1007/s00261-025-04820-3","DOIUrl":"10.1007/s00261-025-04820-3","url":null,"abstract":"","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143582158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The value of radiomics and deep learning based on PET/CT in predicting perineural nerve invasion in rectal cancer.","authors":"Mengzhang Jiao, Zongjing Ma, Zhaisong Gao, Yu Kong, Shumao Zhang, Guangjie Yang, Zhenguang Wang","doi":"10.1007/s00261-025-04833-y","DOIUrl":"https://doi.org/10.1007/s00261-025-04833-y","url":null,"abstract":"<p><strong>Objective: </strong>The objective of this study is to investigate the value of radiomics features and deep learning features based on positron emission tomography/computed tomography (PET/CT) in predicting perineural invasion (PNI) in rectal cancer.</p><p><strong>Methods: </strong>We retrospectively collected 120 rectal cancer (56 PNI-positive patients 64 PNI-negative patients) patients with preoperative <sup>18</sup>F-FDG PET/CT examination and randomly divided them into training and validation sets at a 7:3 ratio. We also collected 31 rectal cancer patients from two other hospitals as an independent external validation set. χ2 test and binary logistic regression were used to analyze PET metabolic parameters. PET/CT images were utilized to extract radiomics features and deep learning features. The Mann-Whitney U test and LASSO were employed to select valuable features. Metabolic parameter, radiomics, deep learning and combined models were constructed. ROC curves were generated to evaluate the performance of models.</p><p><strong>Results: </strong>The results indicate that metabolic tumor volume (MTV) is correlated with PNI (P = 0.001). In the training set and validation set, the AUC values of the metabolic parameter model were 0.673 (95%CI: 0.572-0.773), 0.748 (95%CI: 0.599-0.896). We selected 16 radiomics features and 17 deep learning features as valuable factors for predicting PNI. The AUC values of radiomics model and deep learning model were 0.768 (95%CI: 0.667-0.868) and 0.860 (95%CI: 0.780-0.940) in the training set. And the AUC values in the validation set were 0.803 (95%CI: 0.656-0.950) and 0.854 (95% CI 0.721-0.987). Finally, the combined model exhibited AUCs of 0.893 (95%CI: 0.825-0.961) in the training set and 0.883 (95%CI: 0.775-0.990) in the validation set. In the external validation set, the combined model achieved an AUC of 0.829 (95% CI: 0.674-0.984), outperforming each individual model. The decision curve analysis of these models indicated that using the combined model to guide treatment provided a substantial net benefit.</p><p><strong>Conclusions: </strong>This combined model established by integrating PET metabolic parameters, radiomics features, and deep learning features can accurately predict the PNI in rectal cancer.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143571615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced ISUP grade prediction in prostate cancer using multi-center radiomics data.","authors":"Yuying Liu, Xueqing Han, Haohui Chen, Qirui Zhang","doi":"10.1007/s00261-025-04858-3","DOIUrl":"https://doi.org/10.1007/s00261-025-04858-3","url":null,"abstract":"<p><strong>Background: </strong>To explore the predictive value of radiomics features extracted from anatomical ROIs in differentiating the International Society of Urological Pathology (ISUP) grading in prostate cancer patients.</p><p><strong>Methods: </strong>This study included 1,500 prostate cancer patients from a multi-center study. The peripheral zone (PZ) and central gland (CG, transition zone + central zone) of the prostate were segmented using deep learning algorithms and were defined as the regions of interest (ROI) in this study. A total of 12,918 image-based features were extracted from T2-weighted imaging (T2WI), apparent diffusion coefficient (ADC), and diffusion-weighted imaging (DWI) images of these two ROIs. Synthetic minority over-sampling technique (SMOTE) algorithm was used to address the class imbalance problem. Feature selection was performed using Pearson correlation analysis and random forest regression. A prediction model was built using the random forest classification algorithm. Kruskal-Wallis H test, ANOVA, and Chi-Square Test were used for statistical analysis.</p><p><strong>Results: </strong>A total of 20 ISUP grading-related features were selected, including 10 from the PZ ROI and 10 from the CG ROI. On the test set, the combined PZ + CG radiomics model exhibited better predictive performance, with an AUC of 0.928 (95% CI: 0.872, 0.966), compared to the PZ model alone (AUC: 0.838; 95% CI: 0.722, 0.920) and the CG model alone (AUC: 0.904; 95% CI: 0.851, 0.945).</p><p><strong>Conclusion: </strong>This study demonstrates that radiomic features extracted based on anatomical sub-region of the prostate can contribute to enhanced ISUP grade prediction. The combination of PZ + GG can provide more comprehensive information with improved accuracy. Further validation of this strategy in the future will enhance its prospects for improving decision-making in clinical settings.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143565695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DeepOptimalNet: optimized deep learning model for early diagnosis of pancreatic tumor classification in CT imaging.","authors":"T Thanya, T Jeslin","doi":"10.1007/s00261-025-04860-9","DOIUrl":"https://doi.org/10.1007/s00261-025-04860-9","url":null,"abstract":"<p><p>Computed Tomography (CT) imaging captures detailed cross-sectional images of the pancreas and surrounding structures and provides valuable information for medical professionals. The classification of pancreatic CT images presents significant challenges due to the complexities of pancreatic diseases, especially pancreatic cancer. These challenges include subtle variations in tumor characteristics, irregular tumor shapes, and intricate imaging features that hinder accurate and early diagnosis. Image noise and variations in image quality also complicate the analysis. To address these classification problems, advanced medical imaging techniques, optimization algorithms, and deep learning methodologies are often employed. This paper proposes a robust classification model called DeepOptimalNet, which integrates optimization algorithms and deep learning techniques to handle the variability in imaging characteristics and subtle variations associated with pancreatic tumors. The model uses a comprehensive approach to enhance the analysis of medical CT images, beginning with the application of the Gaussian smoothing filter (GSF) for noise reduction and feature enhancement. It introduces the Modified Remora Optimization Algorithm (MROA) to improve the accuracy and efficiency of pancreatic cancer tissue segmentation. The adaptability of modified optimization algorithms to specific challenges such as irregular tumor shapes is emphasized. The paper also utilizes Deep Transfer CNN with ResNet-50 (DTCNN) for feature extraction, leveraging transfer learning to enhance prediction accuracy in CT images. ResNet-50's strong feature extraction capabilities are particularly relevant to fault diagnosis in CT images. The focus then shifts to a Deep Cascade Convolutional Neural Network with Multimodal Learning (DCCNN-ML) for classifying pancreatic cancer in CT images. The DeepOptimalNet approach underscores the advantages of deep learning techniques, multimodal learning, and cascade architectures in addressing the complexity and subtle variations inherent in pancreatic cancer imaging, ultimately leading to more accurate and robust classifications. The proposed DeepOptimalNet achieves 99.3% accuracy, 99.1% sensitivity, 99.5% specificity, and 99.3% F-score, surpassing existing models in pancreatic tumor classification. Its MROA-based segmentation improves boundary delineation, while DTCNN with ResNet-50 enhances feature extraction for small and low-contrast tumors. Benchmark validation confirms its superior classification performance, reduced false positives, and improved diagnostic reliability compared to traditional deep learning methods.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143565773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}