A Novel Pipeline for Adrenal Gland Segmentation: Integration of a Hybrid Post-Processing Technique with Deep Learning
Michael Fayemiwo, Bryan Gardiner, Jim Harkin, Liam McDaid, Punit Prakash, Michael Dennedy
Journal of Imaging Informatics in Medicine, published 2025-03-04. DOI: 10.1007/s10278-025-01449-y

Accurate segmentation of adrenal glands from CT images is essential for enhancing computer-aided diagnosis and surgical planning. However, the small size, irregular shape, and proximity to surrounding tissues make this task highly challenging. This study introduces a novel pipeline that significantly improves the segmentation of left and right adrenal glands by integrating advanced pre-processing techniques and a robust post-processing framework. Utilising a 2D UNet architecture with various backbones (VGG16, ResNet34, InceptionV3), the pipeline leverages test-time augmentation (TTA) and targeted removal of unconnected regions to enhance accuracy and robustness. Our results demonstrate a substantial improvement, with a 38% increase in the Dice similarity coefficient for the left adrenal gland and an 11% increase for the right adrenal gland on the AMOS dataset, achieved by the InceptionV3 model. Additionally, the pipeline significantly reduces false positives, underscoring its potential for clinical applications and its superiority over existing methods. These advancements make our approach a crucial contribution to the field of medical image segmentation.

Landscape of 2D Deep Learning Segmentation Networks Applied to CT Scan from Lung Cancer Patients: A Systematic Review
Somayeh Sadat Mehrnia, Zhino Safahi, Amin Mousavi, Fatemeh Panahandeh, Arezoo Farmani, Ren Yuan, Arman Rahmim, Mohammad R Salmanpour
Journal of Imaging Informatics in Medicine, published 2025-03-04. DOI: 10.1007/s10278-025-01458-x

Background: The increasing rates of lung cancer emphasize the need for early detection through computed tomography (CT) scans, enhanced by deep learning (DL) to improve diagnosis, treatment, and patient survival. This review examines current and prospective applications of 2D DL networks in lung cancer CT segmentation, summarizing the research and highlighting essential concepts and gaps.

Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic search of peer-reviewed studies from 01/2020 to 12/2024 on data-driven population segmentation using structured data was conducted across Google Scholar, PubMed, Science Direct, and the IEEE (Institute of Electrical and Electronics Engineers) and ACM (Association for Computing Machinery) libraries. A total of 124 studies met the inclusion criteria and were analyzed.

Results: The LIDC-IDRI dataset was the most frequently used, and the reviewed work relies predominantly on supervised learning with labeled data. The UNet model and its variants were the most frequently used models in medical image segmentation, achieving Dice Similarity Coefficients (DSC) of up to 0.9999. The reviewed studies exhibit significant gaps in addressing class imbalance (67%), underuse of cross-validation (21%), and poor model stability evaluation (3%). Additionally, 88% did not address missing data, and generalizability concerns were discussed in only 34% of cases.

Conclusions: The review emphasizes the importance of convolutional neural networks, particularly UNet, in lung CT analysis and advocates a combined 2D/3D modeling approach. It also highlights the need for larger, more diverse datasets and the exploration of semi-supervised and unsupervised learning to enhance automated lung cancer diagnosis and early detection.

{"title":"Spatial-Temporal Information Fusion for Thyroid Nodule Segmentation in Dynamic Contrast-Enhanced MRI: A Novel Approach.","authors":"Binze Han, Qian Yang, Xuetong Tao, Meini Wu, Long Yang, Wenming Deng, Wei Cui, Dehong Luo, Qian Wan, Zhou Liu, Na Zhang","doi":"10.1007/s10278-025-01463-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01463-0","url":null,"abstract":"<p><p>This study aims to develop a novel segmentation method that utilizes spatio-temporal information for segmenting two-dimensional thyroid nodules on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Leveraging medical morphology knowledge of the thyroid gland, we designed a semi-supervised segmentation model that first segments the thyroid gland, guiding the model to focus exclusively on the thyroid region. This approach reduces the complexity of nodule segmentation by filtering out irrelevant regions and artifacts. Then, we introduced a method to explicitly extract temporal information from DCE-MRI data and integrated this with spatial information. The fusion of spatial and temporal features enhances the model's robustness and accuracy, particularly in complex imaging scenarios. Experimental results demonstrate that the proposed method significantly improves segmentation performance across multiple state-of-the-art models. The Dice similarity coefficient (DSC) increased by 8.41%, 7.05%, 9.39%, 11.53%, 20.94%, 17.94%, and 15.65% for U-Net, U-Net + + , SegNet, TransUnet, Swin-Unet, SSTrans-Net, and VM-Unet, respectively, and significantly improved the segmentation accuracy of nodules of different sizes. These results highlight the effectiveness of our spatial-temporal approach in achieving accurate and reliable thyroid nodule segmentation, offering a promising framework for clinical applications and future research in medical image analysis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Tumor Segmentation in Breast-Conserving Surgery Using Deep Learning on Breast Tomosynthesis.","authors":"Wen-Pei Wu, Yu-Wen Chen, Hwa-Koon Wu, Dar-Ren Chen, Yu-Len Huang","doi":"10.1007/s10278-025-01457-y","DOIUrl":"https://doi.org/10.1007/s10278-025-01457-y","url":null,"abstract":"<p><p>Breast cancer is one of the leading causes of cancer-related deaths among women worldwide, with approximately 2.3 million diagnoses and 685,000 deaths in 2020. Early-stage breast cancer is often managed through breast-conserving surgery (BCS) combined with radiation therapy, which aims to preserve the breast's appearance while reducing recurrence risks. This study aimed to enhance intraoperative tumor segmentation using digital breast tomosynthesis (DBT) during BCS. A deep learning model, specifically an improved U-Net architecture incorporating a convolutional block attention module (CBAM), was utilized to delineate tumor margins with high precision. The system was evaluated on 51 patient cases by comparing automated segmentation with manually delineated contours and pathological assessments. Results showed that the proposed method achieved promising accuracy, with Intersection over Union (IoU) and Dice coefficients of 0.866 and 0.928, respectively, demonstrating its potential to improve intraoperative margin assessment and surgical outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utilization of an Electronic Health Record Embedded Enterprise Health Data Exchange: A Single Institute Experience
Joshua Volin, Vidya Viswanathan, Peter Harri, Colin Segovis, Nabile Safdar, Elias Kikano
Journal of Imaging Informatics in Medicine, published 2025-03-03. DOI: 10.1007/s10278-025-01459-w

This study evaluates the demand, volume, and institutional utilization of an EHR-based platform for health data exchange in a single US academic system. A retrospective review (3/2023-4/2024) spanned 11 hospitals and over 500 outpatient sites. Analytic reports from the Epic Care Everywhere Image Exchange Advanced Platform (Verona, WI) captured inbound (requested internally) and outbound (accessed externally) data volumes, including thumbnails and subsequent reference-quality key image retrievals. Data types were categorized as DICOM or non-DICOM (e.g., JPG, PDF). Counts, retrieval rates, and geographic patterns were analyzed. Inbound data involved 1.6 million patients from 370 external organizations, with key image retrieval rates of 5% for DICOM and 3% for non-DICOM; 77% of inbound thumbnails originated in-state. From 3/2023 to 4/2024, 1.9 million outbound thumbnails were accessed by 1478 external institutions, with a 78% reference-quality key image retrieval rate. Most outbound thumbnails were non-DICOM (66%); however, overall DICOM retrieval rates (5%) were higher than non-DICOM rates (3%). In-state institutions retrieved the most thumbnails (1,156,896; 59%), whereas out-of-state sites requested more reference-quality key images (1,034,614; 67%). Top users spanned the private and public health sectors. The findings show a growing reliance on EHR-embedded health data exchange. The abundance of inbound non-DICOM data underscores interoperability challenges, while high DICOM retrieval rates emphasize its clinical importance. Geographic disparities highlight the need for standardized solutions to improve continuity of care. Moreover, the upcoming HTI-2 bill mandates stronger data exchange measures, reinforcing the urgency of scalability. Health data exchange is in high demand as patients increasingly seek care across the US.

{"title":"Appropriateness of Thyroid Nodule Cancer Risk Assessment and Management Recommendations Provided by Large Language Models.","authors":"Mohammad Alarifi","doi":"10.1007/s10278-025-01454-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01454-1","url":null,"abstract":"<p><p>The study evaluates the appropriateness and reliability of thyroid nodule cancer risk assessment recommendations provided by large language models (LLMs) ChatGPT, Gemini, and Claude in alignment with clinical guidelines from the American Thyroid Association (ATA) and the National Comprehensive Cancer Network (NCCN). A team comprising a medical imaging informatics specialist and two radiologists developed 24 clinically relevant questions based on ATA and NCCN guidelines. The readability of AI-generated responses was evaluated using the Readability Scoring System. A total of 322 radiologists in training or practice from the United States, recruited via Amazon Mechanical Turk, assessed the AI responses. Quantitative analysis using SPSS measured the appropriateness of recommendations, while qualitative feedback was analyzed through Dedoose. The study compared the performance of three AI models ChatGPT, Gemini, and Claude in providing appropriate recommendations. Paired samples t-tests showed no statistically significant differences in overall performance among the models. Claude achieved the highest mean score (21.84), followed closely by ChatGPT (21.83) and Gemini (21.47). Inappropriate response rates did not differ significantly, though Gemini showed a trend toward higher rates. However, ChatGPT achieved the highest accuracy (92.5%) in providing appropriate responses, followed by Claude (92.1%) and Gemini (90.4%). Qualitative feedback highlighted ChatGPT's clarity and structure, Gemini's accessibility but shallowness, and Claude's organization with occasional divergence from focus. LLMs like ChatGPT, Gemini, and Claude show potential in supporting thyroid nodule cancer risk assessment but require clinical oversight to ensure alignment with guidelines. Claude and ChatGPT performed nearly identically overall, with Claude having the highest mean score, though the difference was marginal. Further development is necessary to enhance their reliability for clinical use.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smartphone-Based Oral Lesion Image Segmentation Using Deep Learning
Tapabrat Thakuria, Lipi B Mahanta, Sanjib Kumar Khataniar, Rahul Dev Goswami, Nevica Baruah, Trailokya Bharali
Journal of Imaging Informatics in Medicine, published 2025-03-03. DOI: 10.1007/s10278-025-01455-0

Early detection of oral diseases, both cancerous and non-cancerous, is essential for improved outcomes. Segmentation of these lesions from the background is a crucial step in diagnosis, aiding clinicians in isolating affected areas and enhancing the accuracy of deep learning (DL) models. This study aims to develop a DL-based solution for segmenting oral lesions in smartphone-captured images. We designed a novel UNet-based model, OralSegNet, incorporating EfficientNetV2L as the encoder, along with Atrous Spatial Pyramid Pooling (ASPP) and residual blocks to enhance segmentation accuracy. The dataset consisted of 538 raw images with an average resolution of 1394 × 1524 pixels, along with corresponding annotated images of oral lesions. The images were pre-processed and resized to 256 × 256 pixels, and data augmentation techniques were applied to enhance the model's robustness. Our model achieved Dice coefficients of 0.9530 and 0.8518 and IoU scores of 0.9104 and 0.7550 in the validation and test phases, respectively, outperforming traditional and state-of-the-art models. The architecture achieves the lowest FLOPs (34.30 GFLOPs) despite being the most parameter-heavy of the compared models (104.46 million parameters). Given the widespread availability of smartphones, OralSegNet offers a cost-effective, non-invasive CNN-based tool for clinicians, making early diagnosis accessible even in rural areas.

{"title":"SSW-YOLO: Enhanced Blood Cell Detection with Improved Feature Extraction and Multi-scale Attention.","authors":"Hai Sun, Xiaorong Wan, Shouguo Tang, Yingna Li","doi":"10.1007/s10278-025-01460-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01460-3","url":null,"abstract":"<p><p>The integration of deep learning in medical image analysis has driven significant progress, especially in the domain of automatic blood cell detection. While the YOLO series of algorithms have become widely adopted as a real-time object detection approach, there is a need for further refinement for the detection of small targets like blood cells and in low-resolution images. In this context, we introduce SSW-YOLO, a novel algorithm designed to tackle these challenges. The primary innovations of SSW-YOLO include the use of a spatial-to-depth convolution (SPD-Conv) layer to enhance feature extraction, the adoption of a Swin Transformer for multi-scale attention mechanisms, the simplification of the c2f module to reduce model complexity, and the utilization of Wasserstein distance loss (WDLoss) function to improve localization accuracy. With these enhancements, SSW-YOLO significantly improves the accuracy and efficiency of blood cell detection, reduces human error, and consequently accelerates the diagnosis of blood disorders while enhancing the precision of clinical diagnoses. Empirical analysis on the BCCD blood cell dataset indicates that SSW-YOLO achieves a mean average precision (mAP) of 94.0%, demonstrating superior performance compared to existing methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnosing Ankylosing Spondylitis via Architecture-Modified ResNet and Combined Conventional Magnetic Resonance Imagery
Riel Castro-Zunti, Eun Hae Park, Hae Ni Park, Younhee Choi, Gong Yong Jin, Hee Suk Chae, Seok-Bum Ko
Journal of Imaging Informatics in Medicine, published 2025-03-03. DOI: 10.1007/s10278-025-01427-4

Ankylosing spondylitis (AS), a lifelong inflammatory disease, leads to fusion of vertebrae and sacroiliac joints (SIJs) if undiagnosed. Conventional magnetic resonance imaging (MRI), e.g., T1w/T2w, is the diagnostic modality of choice for AS. However, computed tomography (CT), a second-line modality, offers higher specificity because CT differentiates AS-relevant bony erosions/lesions better than MRI. We wished to ascertain whether MRI could be used to train/optimize convolutional neural networks (CNNs) for AS classification, and which type of conventional MRI may dominate. We extracted 534 AS and 606 control SIJs from 56 patients with three simultaneously captured conventional MRI sequences. For classification, we compared modified/optimized variants of ResNet50, InceptionV3, and VGG16. CNNs were fine-tuned using 6-fold cross-validation and optimized architecturally and by learning rate. To automate SIJ extraction, we also developed a YOLOv5-based SIJ detector. Models trained on images formed by the RGB combination of the three MRI sequences significantly outperformed models trained on any one sequence (p < 0.05). The best architecture, located via architectural decomposition, was the first 9 blocks of ResNet50. The reduced-parameter model, which met or exceeded the full architecture's performance with 83% fewer parameters, achieved a cross-validation test set accuracy, sensitivity, specificity, and ROC AUC of 95.26%, 96.25%, 94.39%, and 99.1%, respectively. Our SIJ detector achieved 96.88-99.88% mAP@0.5. Deep learning models successfully distinguish AS from control SIJs. Models trained on combined conventional MRI achieve high sensitivity and specificity, reducing the need for CT and its associated ionizing radiation.

Multi-attention Mechanism for Enhanced Pseudo-3D Prostate Zonal Segmentation
Chetana Krishnan, Ezinwanne Onuoha, Alex Hung, Kyung Hyun Sung, Harrison Kim
Journal of Imaging Informatics in Medicine, published 2025-02-28. DOI: 10.1007/s10278-025-01401-0

This study presents a novel pseudo-3D Global-Local Channel Spatial Attention (GLCSA) mechanism designed to enhance prostate zonal segmentation in high-resolution T2-weighted MRI images. GLCSA captures complex, multi-dimensional features while maintaining computational efficiency by integrating global and local attention in channel and spatial domains, complemented by a slice interaction module simulating 3D processing. Applied across various U-Net architectures, GLCSA was evaluated on two datasets: a proprietary set of 44 patients and the public ProstateX dataset of 204 patients. Performance, measured using the Dice Similarity Coefficient (DSC) and Mean Surface Distance (MSD) metrics, demonstrated significant improvements in segmentation accuracy for both the transition zone (TZ) and peripheral zone (PZ), with minimal parameter increase (1.27%). GLCSA achieved DSC increases of 0.74% and 11.75% for TZ and PZ, respectively, in the proprietary dataset. In the ProstateX dataset, improvements were even more pronounced, with DSC increases of 7.34% for TZ and 24.80% for PZ. Comparative analysis showed GLCSA-UNet performing competitively against other 2D, 2.5D, and 3D models, with DSC values of 0.85 (TZ) and 0.65 (PZ) on the proprietary dataset and 0.80 (TZ) and 0.76 (PZ) on the ProstateX dataset. Similarly, MSD values were 1.14 (TZ) and 1.21 (PZ) on the proprietary dataset and 1.48 (TZ) and 0.98 (PZ) on the ProstateX dataset. Ablation studies highlighted the effectiveness of combining channel and spatial attention and the advantages of global embedding over patch-based methods. In conclusion, GLCSA offers a robust balance between the detailed feature capture of 3D models and the efficiency of 2D models, presenting a promising tool for improving prostate MRI image segmentation.
