Title: A Deep Learning-Based EffConvNeXt Model for Automatic Classification of Cystic Bronchiectasis: An Explainable AI Approach
Authors: Veysi Tekin, Muhammed Tekinhatun, Salih Taha Alperen Özçelik, Hüseyin Fırat, Hüseyin Üzen
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-25
DOI: 10.1007/s10278-025-01688-z

Abstract: Cystic bronchiectasis and pneumonia are respiratory conditions that significantly impact morbidity and mortality worldwide and present with overlapping features on chest X-rays (CXR), making accurate diagnosis challenging; early detection can greatly improve patient outcomes. Recent advances in deep learning (DL) have improved diagnostic accuracy in medical imaging. This study proposes EffConvNeXt, a hybrid model combining EfficientNetB1 and ConvNeXtTiny, designed to improve classification of cystic bronchiectasis, pneumonia, and normal cases in CXRs. The model balances EfficientNetB1's efficiency with ConvNeXtTiny's advanced feature extraction, enabling better identification of complex patterns in CXR images, and addresses the limitations of each model individually: EfficientNetB1's squeeze-and-excitation (SE) blocks sharpen focus on critical image regions while keeping the model lightweight and fast, and ConvNeXtTiny improves detection of subtle abnormalities, making the combined model well suited to rapid, accurate CXR analysis in clinical settings. Performance was evaluated on 5899 CXR images collected from Dicle University Medical Faculty. Used individually, ConvNeXtTiny achieved 97.12% accuracy and EfficientNetB1 97.79%; the combined EffConvNeXt raised accuracy to 98.25%, a 0.46-point improvement that outperformed all other DL models tested. These findings indicate that EffConvNeXt provides a reliable, automated solution for distinguishing cystic bronchiectasis and pneumonia, supporting clinical decision-making with enhanced diagnostic accuracy.
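The abstract does not detail how the two backbones are fused. As a minimal sketch of the general late-fusion pattern (two feature extractors, concatenated features, one classifier head), with stand-in pooling functions in place of the real EfficientNetB1 and ConvNeXtTiny networks — everything below is illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def effnet_features(x):
    # Stand-in for EfficientNetB1's pooled feature vector (hypothetical).
    return x.mean(axis=(1, 2))          # (N, C)

def convnext_features(x):
    # Stand-in for ConvNeXtTiny's pooled feature vector (hypothetical).
    return x.max(axis=(1, 2))           # (N, C)

def effconvnext_logits(x, w, b):
    # Late fusion: concatenate both feature vectors, then apply one linear
    # classifier over the joint representation (3 classes: cystic
    # bronchiectasis, pneumonia, normal).
    f = np.concatenate([effnet_features(x), convnext_features(x)], axis=1)
    return f @ w + b

x = rng.standard_normal((4, 224, 224, 3))   # batch of 4 CXR images
w = rng.standard_normal((6, 3)) * 0.01      # 2 backbones x 3 channels -> 3 classes
b = np.zeros(3)
logits = effconvnext_logits(x, w, b)
print(logits.shape)                         # (4, 3)
```

In a real implementation the stand-in functions would be the pretrained backbones' penultimate activations, and the head would be trained end to end.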
Title: Facilitating Collaboration Across the Radiology Department by Expanding Access to Real-Time CT Protocols
Authors: Emi Eastman, Barry D Pressman, Vu Nguyen, Lucien Zang, Yifang Zhou
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-25
DOI: 10.1007/s10278-025-01658-5

Abstract: CT protocols provide valuable information and guidance to radiologists, technologists, physicists, schedulers, and nurses, but access to up-to-date protocol information is limited in practice. Variable protocol formats, vendor-specific parameters, and the lack of a structured protocol change process generate multiple asynchronous versions of protocols, and the need for user-dependent protocol content further hinders the establishment of a usable online protocol site. To address these challenges, a centralized intranet hub was developed by integrating four essential components: vendor-neutral technical protocol reformatting, machine-neutral clinical protocol templates, a user-dependent customized interface design, and automated publication of protocols on the intranet. The improved workflow streamlined the process of creating, disseminating, and utilizing protocols. The standardization of protocols and accessibility of information have facilitated collaboration among multidisciplinary teams for effective CT operation in the radiology department.
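As an illustration of what vendor-neutral reformatting might involve — the field names, vendor aliases, and values below are hypothetical, not taken from the paper — a mapping from vendor-specific parameter names onto one shared schema could look like:

```python
# Hypothetical neutral CT protocol schema; illustrative only.
NEUTRAL_FIELDS = ["kvp", "tube_current_ma", "slice_thickness_mm", "pitch"]

# Hypothetical per-vendor parameter-name aliases.
VENDOR_ALIASES = {
    "vendor_a": {"kV": "kvp", "mA": "tube_current_ma",
                 "SliceThickness": "slice_thickness_mm", "Pitch": "pitch"},
    "vendor_b": {"voltage_kv": "kvp", "current_ma": "tube_current_ma",
                 "thickness": "slice_thickness_mm", "pitch_factor": "pitch"},
}

def to_neutral(vendor, params):
    """Map one vendor's parameter names onto the shared neutral schema."""
    alias = VENDOR_ALIASES[vendor]
    neutral = {alias[k]: v for k, v in params.items() if k in alias}
    # Missing fields stay explicit (None) so reviewers can spot gaps.
    return {f: neutral.get(f) for f in NEUTRAL_FIELDS}

a = to_neutral("vendor_a", {"kV": 120, "mA": 200, "SliceThickness": 1.0, "Pitch": 0.8})
b = to_neutral("vendor_b", {"voltage_kv": 120, "current_ma": 180, "thickness": 1.0, "pitch_factor": 0.9})
print(a["kvp"], b["kvp"])   # 120 120
```

Once every vendor's protocol is expressed in one schema, a single template can render all of them for the intranet site.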
Title: Localizing Knee Pain via Explainable Bayesian Generative Models and Counterfactual MRI: Data from the Osteoarthritis Initiative
Authors: Tzu-Yi Chuang, Pin-Hsun Lian, Yu-Chen Kuo, Gary Han Chang
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-24
DOI: 10.1007/s10278-025-01678-1

Abstract: Osteoarthritis (OA) pain often does not correlate with magnetic resonance imaging (MRI)-detected structural abnormalities, limiting the clinical utility of traditional volume-based lesion assessments. To address this mismatch, we present a novel explainable artificial intelligence (XAI) framework that localizes pain-driving abnormalities in knee MR images via counterfactual image synthesis and Shapley-based feature attribution. Our method combines a Bayesian generative network, trained to synthesize asymptomatic versions of symptomatic knees, with a black-box pain classifier to generate counterfactual MRI scans. These counterfactuals, constrained by multimodal segmentation and uncertainty-aware inference, isolate lesion regions that are likely responsible for symptoms. Applying Shapley additive explanations (SHAP) to the classifier output precisely quantifies each lesion's contribution to pain. We trained and validated this framework on 2148 knee pairs from a multicenter study of the Osteoarthritis Initiative (OAI), achieving high anatomical specificity in identifying pain-relevant features such as patellar effusions and bone marrow lesions. An odds ratio (OR) analysis revealed that SHAP-derived lesion scores were significantly more strongly associated with pain than raw lesion volumes (OR 6.75 vs. 3.73 in patellar regions), supporting the interpretability and clinical relevance of the model. Compared with conventional saliency methods and volumetric measures, our approach demonstrates superior lesion-level resolution and highlights the spatial heterogeneity of OA pain mechanisms. These results establish a new direction for interpretable, lesion-specific MRI analyses that could guide personalized treatment strategies for musculoskeletal disorders.
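For readers unfamiliar with the reported metric, the odds ratio for a 2x2 contingency table is straightforward to compute. The counts below are made up for illustration; the study reports only the resulting ORs (6.75 vs. 3.73), not the underlying tables:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    # OR = (a/b) / (c/d) for a 2x2 table:
    #                 pain   no pain
    #   lesion score    a       b
    #   no lesion       c       d
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical counts: 20 painful / 10 pain-free knees with a high lesion
# score, 10 painful / 15 pain-free knees without one.
print(round(odds_ratio(20, 10, 10, 15), 6))  # 3.0
```

An OR above 1 means the feature is associated with higher odds of pain; the paper's comparison of ORs (SHAP score vs. raw volume) is what supports the claim that attribution scores track symptoms better than lesion size.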
Title: TCF-Net: A Hierarchical Transformer Convolution Fusion Network for Prostate Cancer Segmentation in Transrectal Ultrasound Images
Authors: Xu Lu, Qihao Zhou, Zhiwei Xiao, Yanqi Guo, Qianhong Peng, Shen Zhao, Shaopeng Liu, Jun Huang, Chuan Yang, Yuan Yuan
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-24
DOI: 10.1007/s10278-025-01690-5

Abstract: Accurate prostate segmentation from transrectal ultrasound (TRUS) images is key to computer-aided diagnosis of prostate cancer. However, this task faces serious challenges, including various interferences, variable prostate shapes, and insufficient datasets. To address these challenges, a region-adaptive transformer convolution fusion net (TCF-Net) for accurate and robust segmentation of TRUS images is proposed. TCF-Net has a hierarchical encoder-decoder structure with two main modules: (1) a region-adaptive transformer-based encoder that identifies and localizes prostate regions by learning the relationship between objects and pixels, helping the model overcome interference and prostate shape variation; and (2) a convolution-based decoder that improves applicability to small datasets. In addition, a patch-based fusion module is proposed to introduce an inductive bias for fine prostate segmentation. TCF-Net is trained and evaluated on a challenging clinical TRUS dataset collected from the First Affiliated Hospital of Jinan University in China, containing 1000 TRUS images of 135 patients. Experimental results show that TCF-Net reaches an mIoU of 94.4%, exceeding other state-of-the-art (SOTA) models by more than 1%.
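The reported mIoU is the class-wise average of intersection-over-union between predicted and reference masks. A toy 2D version (the paper's evaluation details are not given; this is the standard definition):

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean intersection-over-union across classes present in the union."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
# background IoU 3/4, prostate IoU 4/5 -> mIoU (0.75 + 0.8) / 2
print(round(miou(pred, target, 2), 3))  # 0.775
```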
Title: Ethical Considerations in Patient Privacy and Data Handling for AI in Cardiovascular Imaging and Radiology
Authors: Saba Mehrtabar, Ahmed Marey, Anushka Desai, Abdelrahman M Saad, Vishal Desai, Julian Goñi, Basudha Pal, Muhammad Umair
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-24
DOI: 10.1007/s10278-025-01656-7

Abstract: The integration of artificial intelligence (AI) into cardiovascular imaging and radiology offers the potential to enhance diagnostic accuracy, streamline workflows, and personalize patient care. However, the rapid adoption of AI has introduced complex ethical challenges, particularly concerning patient privacy, data handling, informed consent, and data ownership. This narrative review explores these issues by synthesizing literature from clinical, technical, and regulatory perspectives. We examine the tensions between data utility and data protection, the evolving role of transparency and explainable AI, and the disparities in ethical and legal frameworks across jurisdictions such as the European Union, the USA, and emerging players like China. We also highlight the vulnerabilities introduced by cloud computing, adversarial attacks, and the use of commercial datasets. Ethical frameworks and regulatory guidelines are compared, and proposed mitigation strategies such as federated learning, blockchain, and differential privacy are discussed. To ensure ethical implementation, we emphasize the need for shared accountability among clinicians, developers, healthcare institutions, and policymakers. Ultimately, the responsible development of AI in medical imaging must prioritize patient trust, fairness, and equity, underpinned by robust governance and transparent data stewardship.
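Of the mitigation strategies the review names, differential privacy is the simplest to sketch. The Laplace mechanism below adds calibrated noise to a count query over patient records; parameters are illustrative, and real deployments involve far more care (privacy budgets, composition, clipping):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Adding Laplace(sensitivity / epsilon) noise gives epsilon-differential
    # privacy for a query with the stated L1 sensitivity.
    return true_value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
# A count query ("how many scans show finding X?") changes by at most 1
# when one patient is added or removed, so its sensitivity is 1.
noisy = laplace_mechanism(128, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # 128 plus Laplace noise; the exact count is never released
```

Smaller epsilon means stronger privacy but noisier answers, which is the utility/protection tension the review discusses.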
Title: Generating Brain MRI with StyleGAN2-ADA: The Effect of the Training Set Size on the Quality of Synthetic Images
Authors: Matteo Lai, Mario Mascalchi, Carlo Tessa, Stefano Diciotti
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-23
DOI: 10.1007/s10278-025-01536-0

Abstract: The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
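The sensitivity of coverage to the number of synthetic samples follows directly from its definition: a real point is "covered" if any synthetic point falls inside its k-nearest-neighbour ball, so adding synthetic samples can only raise the score. A toy sketch using the standard coverage definition (not necessarily the authors' exact implementation):

```python
import numpy as np

def coverage(real, fake, k=3):
    """Fraction of real samples whose k-NN ball (radius = distance to the
    k-th nearest real neighbour) contains at least one synthetic sample."""
    def dists(a, b):
        return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    d_rr = dists(real, real)
    np.fill_diagonal(d_rr, np.inf)           # a point is not its own neighbour
    radii = np.sort(d_rr, axis=1)[:, k - 1]  # k-th NN distance per real point
    d_rf = dists(real, fake)
    return float((d_rf.min(axis=1) <= radii).mean())

rng = np.random.default_rng(0)
real = rng.standard_normal((200, 8))         # stand-in feature embeddings
fake_many = rng.standard_normal((2000, 8))
fake_few = fake_many[:50]
# More synthetic samples inflate coverage even for the same generator:
print(coverage(real, fake_few) <= coverage(real, fake_many))  # True
```

Because `fake_few` is a subset of `fake_many`, the minimum real-to-fake distance can only shrink as samples are added, which is exactly the inflation effect the paper warns about.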
Title: Exploiting Cross-modal Collaboration and Discrepancy for Semi-supervised Ischemic Stroke Lesion Segmentation from Multi-sequence MRI Images
Authors: Yuanxin Cao, Tian Qin, Yang Liu
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-23
DOI: 10.1007/s10278-025-01691-4

Abstract: Accurate ischemic stroke lesion segmentation helps define the optimal reperfusion treatment and unveil the stroke etiology. While diffusion-weighted MRI (DWI) is central to stroke diagnosis, learning from multi-sequence MRI images such as apparent diffusion coefficient (ADC) maps can capitalize on the complementary nature of information from the various modalities and shows strong potential to improve segmentation performance. However, existing deep learning-based methods require large amounts of well-annotated data from multiple modalities for training, and acquiring such datasets is often impractical. We explore semi-supervised stroke lesion segmentation from multi-sequence MRI images, utilizing unlabeled data to improve performance under limited annotation, and propose a novel framework that exploits cross-modal collaboration and discrepancy to use unlabeled data efficiently. Specifically, we adopt a cross-modal bidirectional copy-paste strategy to enable information collaboration between modalities, and a cross-modal discrepancy-informed correction strategy to learn efficiently from limited labeled multi-sequence MRI data and abundant unlabeled data. Extensive experiments on the ischemic stroke lesion segmentation (ISLES 22) dataset demonstrate that our method utilizes unlabeled data efficiently, improving DSC by 12.32% over a supervised baseline using 10% annotations and outperforming existing semi-supervised segmentation methods.
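A minimal sketch of a bidirectional copy-paste mix: a masked region is swapped between two images, producing two mixed samples that tie the two data distributions together. The paper's version operates across modalities with labeled and unlabeled volumes and propagates labels accordingly; this toy shows only the image-mixing step:

```python
import numpy as np

def bidirectional_copy_paste(img_a, img_b, mask):
    """Swap the masked region between two images (bidirectional mix).
    In the semi-supervised setting, img_a would be labeled and img_b
    unlabeled (or from another MRI sequence), and the same mixing is
    applied to their label / pseudo-label maps."""
    mixed_a = np.where(mask, img_b, img_a)  # a's background, b's foreground
    mixed_b = np.where(mask, img_a, img_b)  # b's background, a's foreground
    return mixed_a, mixed_b

lab = np.zeros((4, 4))                  # stand-in labeled image
unl = np.ones((4, 4))                   # stand-in unlabeled image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                   # central 2x2 patch gets swapped
a, b = bidirectional_copy_paste(lab, unl, mask)
print(a.sum(), b.sum())                 # 4.0 12.0
```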
Title: Comparative Evaluation of Radiomics and Deep Learning Models for Disease Detection in Chest Radiography
Authors: Zhijin He, Alan B McMillan
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-23
DOI: 10.1007/s10278-025-01670-9

Abstract: The application of artificial intelligence (AI) in medical imaging has revolutionized diagnostic practices, enabling advanced analysis and interpretation of radiological data. This study presents a comprehensive evaluation of radiomics-based and deep learning-based approaches for disease detection in chest radiography, focusing on COVID-19, lung opacity, and viral pneumonia. While deep learning models, particularly convolutional neural networks (CNNs) and vision transformers (ViTs), learn directly from image data, radiomics-based models extract handcrafted features, offering potential advantages in data-limited scenarios. We systematically compared the diagnostic performance of various AI models, including Decision Trees, Gradient Boosting, Random Forests, Support Vector Machines (SVMs), and Multi-Layer Perceptrons (MLPs) for radiomics, against state-of-the-art deep learning models such as InceptionV3, EfficientNetL, and ConvNeXtXLarge. Performance was evaluated across multiple sample sizes. At 24 samples, EfficientNetL achieved an AUC of 0.839, outperforming SVM (AUC = 0.762). At 4000 samples, InceptionV3 achieved the highest AUC of 0.996, compared to 0.885 for Random Forest. A Scheirer-Ray-Hare test confirmed significant main and interaction effects of model type and sample size on all metrics. Post hoc Mann-Whitney U tests with Bonferroni correction further revealed consistent performance advantages for deep learning models across most conditions. These findings provide statistically validated, data-driven recommendations for model selection in diagnostic AI. Deep learning models demonstrated higher performance and better scalability with increasing data availability, while radiomics-based models may remain useful in low-data contexts. This study addresses a critical gap in AI-based diagnostic research by offering practical guidance for deploying AI models across diverse clinical environments.
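The Bonferroni step used in the post hoc analysis is simple enough to state exactly: each raw p-value is multiplied by the number of comparisons and capped at 1. The p-values below are hypothetical, not the study's:

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: multiply by the number of tests, cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# e.g. three hypothetical pairwise model comparisons
adjusted = bonferroni([0.004, 0.020, 0.300])
print([round(p, 3) for p in adjusted])  # [0.012, 0.06, 0.9]
```

The correction controls the family-wise error rate at the cost of power, which is why it is typically paired, as here, with an omnibus test run first.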
Title: Refining the Classroom: The Self-Supervised Professor Model for Improved Segmentation of Locally Advanced Pancreatic Ductal Adenocarcinoma
Authors: Jacqueline I Bereska, Selina Palic, Leonard F Bereska, Efstratios Gavves, C Yung Nio, Marnix P M Kop, Femke Struik, Freek Daams, Martijn A van Dam, Tom Dijkhuis, Marc G Besselink, Henk A Marquering, Jaap Stoker, Inez M Verpalen
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-23
DOI: 10.1007/s10278-025-01555-x

Abstract: Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of cancer-related deaths, and accurate staging is critical for treatment planning. Automated 3D segmentation models can aid in staging, but segmenting PDAC, especially in cases of locally advanced pancreatic cancer (LAPC), is challenging due to the tumor's heterogeneous appearance, irregular shapes, and extensive infiltration. This study developed and evaluated a tripartite self-supervised learning architecture for improved 3D segmentation of LAPC, addressing these challenges. We implemented a tripartite architecture consisting of a teacher model, a professor model, and a student model. The teacher model, trained on manually segmented CT scans, generated initial pseudo-segmentations. The professor model refined these segmentations, which were then used to train the student model. We utilized 1115 CT scans from 903 patients for training. Three expert abdominal radiologists manually segmented 30 CT scans from 27 patients with LAPC, serving as reference standards. We evaluated performance using DICE, Hausdorff distance (HD95), and mean surface distance (MSD). The teacher, professor, and student models achieved average DICE scores of 0.60, 0.73, and 0.75, respectively, with significant boundary accuracy improvements (teacher HD95/MSD, 25.71/5.96 mm; professor, 9.68/1.96 mm; student, 4.79/1.34 mm). Our findings demonstrate that the professor model significantly enhances segmentation accuracy for LAPC (p < 0.01). Both the professor and student models offer substantial improvements over previous work. The introduced tripartite self-supervised learning architecture shows promise for improving automated 3D segmentation of LAPC, potentially aiding in more accurate staging and treatment planning.
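The DICE overlap metric used for evaluation, in a toy 2D version (the standard definition; the study evaluates 3D volumes the same way, voxel-wise):

```python
import numpy as np

def dice(pred, target):
    """DICE overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True      # predicted tumor mask, 16 pixels
target = np.zeros((8, 8), dtype=bool)
target[3:7, 3:7] = True    # reference mask, 16 pixels, shifted by one
print(dice(pred, target))  # 9 overlapping pixels -> 2*9/32 = 0.5625
```

DICE rewards overlap but is insensitive to boundary geometry, which is why HD95 and MSD are reported alongside it.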
Title: A Comprehensive Guide to Selecting and (Potentially) Replacing PACS: Navigating the Decision-Making Processes
Authors: Rajeev Nowrangi, Stacey M Elangovan, Brian D Coley, Eric J Crotty, Arnold C Merrow, Usha D Nagaraj, Sara M O'Hara, Susan N Smith, Paula Bennett, Alexander J Towbin
Journal: Journal of Imaging Informatics in Medicine, published 2025-09-19
DOI: 10.1007/s10278-025-01672-7

Abstract: Selecting a Picture Archiving and Communication System (PACS) is a strategic decision that impacts radiology workflow, communication, and operational efficiency. Despite its importance, there is limited guidance on structured approaches to PACS procurement. This study describes a comprehensive, data-driven approach to evaluating and selecting a clinical PACS within a radiology department. A six-phase process was developed: team formation, expectation setting, background assessment, initial vendor assessment, virtual demonstrations, and final evaluation. A multidisciplinary committee identified key pillars and concepts for PACS functionality, which informed vendor evaluations through surveys, standardized demonstrations, and a detailed request for proposal (RFP). Quantitative and qualitative data were used at each phase to score vendors across multiple dimensions including usability, integration, performance, and cost. Eleven pillars and 236 concepts were defined and weighted to evaluate vendor solutions. Five vendors were shortlisted after an initial presentation and invited to provide a virtual demonstration. Three vendors were then selected for onsite assessments using department-generated anonymized datasets. Comprehensive RFPs and cost analyses were incorporated into final evaluations. Ultimately, the incumbent vendor was selected, with a recommendation for reevaluation in 3 years guided by detailed assessment metrics and stakeholder feedback. This case study offers a potentially reproducible methodology for healthcare institutions evaluating PACS solutions. Emphasizing transparency, stakeholder engagement, and data-driven decision-making, the approach should be adaptable to other technology procurement efforts and scalable to smaller projects.
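The weighted scoring across pillars reduces to a weighted sum per vendor. The paper defined 11 weighted pillars and 236 concepts; the pillar names, weights, and ratings below are illustrative only:

```python
# Hypothetical pillar weights (must sum to 1) and committee ratings (0-10).
weights = {"usability": 0.40, "integration": 0.35, "cost": 0.25}

vendors = {
    "vendor_a": {"usability": 8, "integration": 7, "cost": 6},
    "vendor_b": {"usability": 6, "integration": 9, "cost": 8},
}

def weighted_score(ratings):
    # Each pillar's rating is scaled by the committee's weight for that pillar.
    return sum(weights[p] * ratings[p] for p in weights)

scores = {v: round(weighted_score(r), 2) for v, r in vendors.items()}
print(scores)  # {'vendor_a': 7.15, 'vendor_b': 7.55}
```

Making the weights explicit is what lets a committee defend the final ranking to stakeholders and rerun the evaluation when priorities change.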