Artificial Intelligence Iterative Reconstruction Algorithm Combined with Low-Dose Aortic CTA for Preoperative Access Assessment of Transcatheter Aortic Valve Implantation: A Prospective Cohort Study
Qinhua Li, Dan Liu, Kunyao Li, Jing Li, Yongxia Zhou
Journal of Imaging Informatics in Medicine, 2025-08-06. DOI: 10.1007/s10278-025-01622-3

This study explored whether an artificial intelligence iterative reconstruction (AIIR) algorithm combined with low-dose aortic computed tomography angiography (CTA) is clinically effective for assessing preoperative access for transcatheter aortic valve implantation (TAVI). A total of 109 patients were prospectively recruited for aortic CTA and divided into two groups: group A (n = 51), standard-dose CT (SDCT), and group B (n = 58), low-dose CT (LDCT). Group B was further subdivided into groups B1 and B2. Groups A and B2 were reconstructed with a hybrid iterative algorithm (HIR: Karl 3D), whereas group B1 used the AIIR algorithm. CT attenuation and noise were measured in different vessel segments, and the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) were calculated. Two radiologists, blinded to the study details, rated subjective image quality on a 5-point scale. Effective radiation doses were recorded for groups A and B. Group B1 showed the highest CT attenuation, SNR, and CNR and the lowest image noise of the three groups (p < 0.05). Scores for subjective image noise, vessel and non-calcified plaque edge sharpness, and overall image quality were higher in group B1 than in groups A and B2 (p < 0.001). Group B2 had the highest artifact scores compared with groups A and B1 (p < 0.05). The radiation dose in group B was 50.33% lower than in group A (p < 0.001). The AIIR algorithm combined with low-dose CTA yielded better diagnostic images before TAVI than the Karl 3D algorithm.
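The SNR and CNR figures of merit reported above follow the standard ROI-based definitions; a minimal sketch of those formulas (the paper's exact ROI placement and noise estimation are not reproduced here):

```python
def snr(mean_hu: float, noise_sd: float) -> float:
    # Signal-to-noise ratio: mean CT attenuation (HU) in a vessel ROI
    # divided by image noise (the SD of HU values in the ROI).
    return mean_hu / noise_sd

def cnr(vessel_hu: float, background_hu: float, noise_sd: float) -> float:
    # Contrast-to-noise ratio: vessel-to-background contrast over noise.
    return (vessel_hu - background_hu) / noise_sd
```

With a vessel ROI of 450 HU, background of 50 HU, and noise SD of 20 HU, both metrics evaluate to 20, illustrating why lower noise (as with AIIR) raises SNR and CNR for the same attenuation.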
Evaluation of Net Withdrawal Time and Colonoscopy Video Summarization Using Deep Learning Based Automated Temporal Video Segmentation
Kanggil Park, Ji Young Lee, Ahin Choi, Jeong-Sik Byeon, Namkug Kim
Journal of Imaging Informatics in Medicine, 2025-08-05. DOI: 10.1007/s10278-025-01632-1

Adequate withdrawal time is crucial in colonoscopy, as it is directly associated with polyp detection rates. Traditional withdrawal time measurements, however, can be biased by non-observation activities, leading to inaccurate assessments of procedural quality. This study aimed to develop a deep learning (DL) model that measures net withdrawal time by excluding non-observation phases and generates quantitative visual summaries of key procedural events. We developed a DL-based automated temporal video segmentation model trained on 40 full-length colonoscopy videos and 825 cecum clips extracted from 221 colonoscopy procedures. The model classifies four key events: cecum, intervention, outside, and narrow-band imaging (NBI) mode. From the segmentation results, we calculated net withdrawal time and extracted representative images from each segment for video summarization. Performance was evaluated with four standard temporal video segmentation metrics and by correlation with endoscopist-recorded times on internal and external test datasets. On both datasets, the model achieved a total F1 score above 93% for temporal video segmentation. Net withdrawal time correlated strongly with endoscopist-recorded times (internal dataset, r = 0.984; external dataset, r = 0.971; both p < 0.001). The model also generated representative images, and the endoscopists' visual assessment confirmed that these images accurately summarized key events. Compared with manual review, the proposed model offers a more efficient, standardized, and objective approach to assessing procedural quality, with the potential to enhance clinical practice and improve quality assurance in colonoscopy.
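A net withdrawal time of this kind could be derived from per-frame segmentation labels roughly as follows; which of the four classes count as non-observation is our assumption for illustration, not the paper's specification:

```python
def net_withdrawal_seconds(frame_labels, fps=30.0):
    # Count only frames representing mucosal observation during withdrawal;
    # treating "intervention" and "outside" as the excluded non-observation
    # phases is an assumption made for this sketch.
    excluded = {"intervention", "outside"}
    observed = sum(1 for label in frame_labels if label not in excluded)
    return observed / fps
```

For example, a label sequence of one cecum frame, one NBI frame, one intervention frame, and one outside frame at 1 fps yields a net withdrawal time of 2 seconds.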
Ensemble of Handcrafted and Learned Features for Colorectal Cancer Classification
Larissa Ferreira Rodriges Moreira, André Ricardo Backes
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01633-0

Colorectal cancer (CRC) remains one of the most common and lethal malignancies worldwide. The current gold standard for CRC diagnosis relies on histopathological analysis, a time-consuming process subject to inter-observer variability and dependent on expert experience. While convolutional neural networks (CNNs) have achieved remarkable success in medical image analysis, they often require large annotated datasets and lack interpretability. Traditional handcrafted texture descriptors, on the other hand, provide domain-specific insights but may fall short in capturing complex patterns. To address these limitations, we propose a novel ensemble approach that integrates handcrafted texture descriptors with deep learning features extracted from CNNs. Our method leverages the complementary strengths of both feature types, yielding a more robust and discriminative feature space. Experimental evaluations show that our ensemble outperforms state-of-the-art methods across various metrics, achieving an accuracy of 99.20% by combining color textures with deep learning features. This study underscores the potential of integrating traditional and modern techniques to advance medical image analysis, a significant step forward in automated CRC classification.
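The core ensemble step, concatenating handcrafted texture descriptors with CNN features into a single feature space, can be sketched as below; the per-block z-score normalization is our choice for illustration, not a detail stated in the abstract:

```python
import numpy as np

def fuse_features(handcrafted: np.ndarray, deep: np.ndarray,
                  eps: float = 1e-8) -> np.ndarray:
    # Z-score each feature block independently before concatenation so that
    # neither the handcrafted descriptors nor the CNN features dominate the
    # downstream classifier; both blocks are (n_samples, n_features).
    def zscore(block):
        return (block - block.mean(axis=0)) / (block.std(axis=0) + eps)
    return np.hstack([zscore(handcrafted), zscore(deep)])
```

The fused matrix then feeds any standard classifier; its width is simply the sum of the two blocks' feature counts.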
A Novel Dual-Output Deep Learning Model Based on InceptionV3 for Radiographic Bone Age and Gender Assessment
Baraa Rayed, Hakan Amasya, Mana Sezdi
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01623-2

Hand-wrist radiographs are used for bone age prediction, and computer-assisted clinical decision support systems offer solutions to the limitations of radiographic bone age assessment methods. In this study, a multi-output prediction model was designed to predict bone age and gender from digital hand-wrist radiographs. The InceptionV3 architecture served as the backbone, and the model was trained and tested on the open-access dataset of the 2017 RSNA Pediatric Bone Age Challenge. A total of 14,048 samples were divided into training, validation, and testing subsets in a 7:2:1 ratio, and additional specialized convolutional neural network layers, such as a Squeeze-and-Excitation block, were implemented for robust feature management. The proposed model achieved a mean squared error of approximately 25 and a mean absolute error of 3.1 for bone age prediction. In gender classification, an accuracy of 95% and an area under the curve of 97% were achieved. The intra-class correlation coefficient for the continuous bone age predictions was 0.997, and Cohen's κ for the gender predictions was 0.898 (p < 0.001). The proposed model aims to increase efficiency by identifying common and discrete features. The results are promising; however, the mid-to-high-end hardware requirement may limit use on local machines in the clinic. Future studies may consider larger datasets and simplified algorithms.
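Cohen's κ, used above to quantify gender-prediction agreement, can be computed from paired labels as follows (a generic implementation of the standard formula, not the authors' code):

```python
def cohens_kappa(y_true, y_pred):
    # Agreement beyond chance: (p_o - p_e) / (1 - p_e), where p_o is observed
    # agreement and p_e is the chance agreement implied by each rater's
    # marginal label frequencies. Undefined when p_e == 1 (all one class).
    n = len(y_true)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_chance = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n)
        for c in set(y_true) | set(y_pred)
    )
    return (p_observed - p_chance) / (1 - p_chance)
```

Perfect agreement gives κ = 1, chance-level agreement gives κ = 0, so the reported κ = 0.898 indicates near-perfect gender classification.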
In Silico Digital Breast Tomosynthesis Dataset for the Comparative Analysis of Deep Learning Models in Tumor Segmentation
Cristina Alfaro Vergara, Nicolás Araya Caro, Domingo Mery Quiroz, Claudia Prieto Vasquez
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01626-z

The scarcity of publicly available digital breast tomosynthesis (DBT) datasets significantly limits the development of robust deep learning (DL) models for breast tumor segmentation. In this exploratory proof-of-concept study, we assess the viability of in silico-generated DBT data as a training source for tumor segmentation. A dataset of 230 two-dimensional (2D) regions of interest (ROIs), derived from FDA-cleared software and encompassing a spectrum of breast densities and tumor complexities, was used to train 13 DL models based on the U-Net, FCN, DeepLabv3, and DeepLabv3+ architectures. Each model was trained either from scratch or fine-tuned from COCO-pretrained weights (ResNet50/101 backbones). Performance was evaluated using F1-score, intersection over union (IoU), precision, and recall. Among all models, U-Net trained from scratch and DeepLabv3+ fine-tuned with a ResNet50 backbone achieved the highest and most consistent results (F1-scores of 82.52% and 84.98%, and per-image IoUs of 78.49% and 83.77%, respectively). No statistically significant differences were found between models using the Wilcoxon signed-rank test with post hoc Bonferroni correction (corrected α = 0.0042). To evaluate generalization across domains, the baseline U-Net model was retrained from scratch on a hybrid dataset combining in silico and real-world DBT ROIs, yielding promising results (F1-score of 79%). Despite the domain shift, these findings support in silico DBT as a complementary resource for training and benchmarking DL models, particularly in data-limited environments, and provide foundational experimental evidence for integrating computationally generated data into AI-based DBT tumor segmentation workflows.
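The per-image IoU used to rank the models is the standard intersection-over-union on binary segmentation masks; a minimal version for illustration:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    # Intersection over union for boolean segmentation masks of equal shape.
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```

For two 2x2 masks sharing one foreground pixel out of three in their union, the IoU is 1/3; the F1 (Dice) score used alongside it weights the intersection twice and is always at least as large.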
Assessment of Lower Limb Muscle Volume Using 3D Ultrasonography: Validity and Reliability Compared to MRI
Paul Ritsche, Romina Ledergerber, Michele Pansini, Francesco Santini, Oliver Faude
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01624-1

Introduction: Muscle volume is a key indicator of strength and neuromuscular health, commonly assessed via magnetic resonance imaging (MRI). While accurate, MRI is expensive and time-intensive. Three-dimensional ultrasonography (3DUS) offers a more accessible alternative but requires validation because its accuracy depends on the setup. This study investigated the validity and reliability of a custom 3DUS setup for measuring lower limb muscle volumes.

Methods: Fifteen participants (8 female; 18-40 years) underwent two 3DUS sessions and one MRI session. The tibialis anterior, vastus lateralis, gastrocnemii, and biceps femoris muscles were scanned with ultrasonography integrated into a motion capture system. Phantom models were also scanned. After ten participants, the scanning protocol was adapted. 3DUS volumes were analyzed in 3D Slicer by two raters and MRI volumes by one rater. Reliability was assessed using the intra-class correlation coefficient (ICC), coefficient of variation (CV%), standard error of measurement (SEM), and minimal detectable change (MDC).

Results: 3DUS showed excellent test-retest and inter-rater reliability (ICC = 0.97-0.99; CV% = 2.0-4.6%). MDC values were < 5 mL for all muscles. However, 3DUS systematically underestimated volumes compared with MRI (biases: -10.0% to 33.0%), with the best agreement for the tibialis anterior and the worst for the gastrocnemii. After the protocol was adapted, mean differences were reduced by roughly 70%. Phantom scans confirmed that both modalities were accurate, suggesting the in vivo errors arose from probe pressure and sweep inconsistencies.

Conclusion: 3DUS demonstrated excellent reliability but underestimated volumes relative to MRI, influenced by muscle shape and location. Despite these limitations, it is a promising, cost-effective method for tracking longitudinal muscle changes, and the open methodology supports broader application.
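The reliability statistics named in the Methods follow their standard definitions; for example, CV% and the 95% minimal detectable change can be computed as below (the SEM is assumed to be supplied from an upstream ICC-based calculation):

```python
import math

def cv_percent(values):
    # Coefficient of variation: sample SD expressed as a percentage of the mean.
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
    return 100.0 * sd / mean

def mdc95(sem):
    # Minimal detectable change at the 95% confidence level:
    # MDC95 = 1.96 * sqrt(2) * SEM, the smallest change exceeding
    # measurement error between two repeated measurements.
    return 1.96 * math.sqrt(2) * sem
```

An SEM of 1 mL, for instance, gives an MDC95 of about 2.77 mL; the < 5 mL MDCs reported above imply the setup can detect quite small true volume changes.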
Can Machine Learning Predict Metastatic Sites in Pancreatic Ductal Adenocarcinoma? A Radiomic Analysis
F Spoto, R De Robertis, N Cardobi, A Garofano, L Messineo, E Lucin, M Milella, M D'Onofrio
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01627-y

Pancreatic ductal adenocarcinoma (PDAC) exhibits high metastatic potential, with distinct prognoses depending on the metastatic site, and radiomics enables quantitative imaging analysis for predictive modeling. This study evaluated the feasibility of radiomic models for predicting PDAC metastatic patterns, specifically distinguishing hepatic from pulmonary metastases. This retrospective study included 115 PDAC patients with either liver (n = 94) or lung (n = 21) metastases. Radiomic features were extracted from pancreatic arterial- and venous-phase CT scans of the primary tumors using PyRadiomics. Two radiologists independently segmented the tumors for inter-reader reliability assessment. Features with ICC > 0.9 underwent LASSO regularization for feature selection. Class imbalance was addressed with SMOTE and class weighting. Model performance was evaluated with fivefold cross-validation and bootstrap resampling. The multivariate logistic regression model achieved an AUC-ROC of 0.831 (95% CI: 0.752-0.910). At the optimal threshold, sensitivity was 0.762 (95% CI: 0.659-0.865) and specificity 0.787 (95% CI: 0.695-0.879). The negative predictive value for lung metastases was 0.810 (95% CI: 0.734-0.886). LargeDependenceEmphasis showed a trend toward significance (p = 0.0566) as a discriminative feature. Precision was 0.842, recall 0.762, and F1 score 0.800. Radiomic analysis of primary pancreatic tumors shows potential for predicting hepatic versus pulmonary metastatic patterns, and the high negative predictive value for lung metastases may support clinical decision-making. External validation is essential before clinical implementation; these single-center findings require confirmation in larger, multicenter cohorts.
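The selection-and-classification stage described above might look roughly like this in scikit-learn. This is a hedged sketch on synthetic data: it substitutes class weighting for SMOTE (which needs the separate imbalanced-learn package) and a top-k L1-penalized selector for the paper's LASSO thresholding; ICC-stable features are assumed to have been extracted already, e.g., with PyRadiomics.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 50))        # 115 patients, 50 ICC-stable features
y = np.r_[np.zeros(94), np.ones(21)]  # imbalanced: liver (0) vs lung (1)

model = make_pipeline(
    StandardScaler(),
    # L1-penalized selector approximating LASSO feature selection;
    # keeping the 10 largest coefficients is an illustrative choice
    SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
        max_features=10, threshold=-np.inf,
    ),
    # class_weight="balanced" stands in for SMOTE-style rebalancing
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
```

On this random data the cross-validated AUC hovers near chance; the pipeline shape, not the score, is the point.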
Development and Validation of an Explainable MRI-Based Habitat Radiomics Model for Predicting p53-Abnormal Endometrial Cancer: A Multicentre Feasibility Study
Wentao Jin, Hao Zhang, Yan Ning, Xiaojun Chen, Guofu Zhang, Haiming Li, He Zhang
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01631-2

We developed an MRI-based habitat radiomics model (HRM) to predict the p53-abnormal (p53abn) molecular subtype of endometrial cancer (EC). Patients with pathologically confirmed EC were retrospectively enrolled from three hospitals and divided into a training cohort (n = 270), test cohort 1 (n = 70), and test cohort 2 (n = 154). Tumours were partitioned into habitat sub-regions by applying the K-means algorithm to diffusion-weighted imaging (DWI) and contrast-enhanced (CE) images. Radiomics features were extracted from T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), DWI, and CE images. Three machine learning classifiers (logistic regression, support vector machines, and random forests) were used to build predictive models for p53abn EC. Model performance was validated with receiver operating characteristic (ROC) curves, and the best-performing model was selected as the HRM. A whole-region radiomics model (WRM) was also constructed, and a clinical model (CM) was built from five clinical features. The SHapley Additive exPlanations (SHAP) method was used to explain the model outputs, and DeLong's test compared performance across the cohorts. A total of 1920 habitat radiomics features were considered; eight were selected for the HRM, ten for the WRM, and three clinical features for the CM. The HRM achieved the highest AUCs: 0.855 (training), 0.769 (test 1), and 0.766 (test 2), versus 0.707, 0.703, and 0.738 for the WRM and 0.709, 0.641, and 0.665 for the CM. The MRI-based HRM successfully predicted p53abn EC, indicating that habitat analysis combined with machine learning radiomics and SHAP can effectively predict p53abn EC while giving clinicians intuitive, interpretable insight into the impact of the model's risk factors.
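The habitat step, clustering tumour voxels into sub-regions from paired DWI and CE intensities with K-means, can be sketched as follows; the number of habitats and the two-channel voxel feature vector are our illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_labels(dwi, ce, mask, n_habitats=3, seed=0):
    # Cluster tumour voxels into habitat sub-regions using paired DWI and
    # CE intensities as each voxel's feature vector; voxels outside the
    # tumour mask are labelled -1. n_habitats=3 is an illustrative choice.
    voxels = np.stack([dwi[mask], ce[mask]], axis=1)
    km = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed)
    labels = np.full(mask.shape, -1)
    labels[mask] = km.fit_predict(voxels)
    return labels
```

Radiomics features are then extracted per habitat label rather than over the whole tumour, which is what multiplies the feature count (1920 habitat features here).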
Digital Twin Technology in Radiology
Sara Sadat Aghamiri, Rada Amin, Pouria Isavand, Sanaz Vahdati, Atefeh Zeinoddini, Felipe C Kitamura, Linda Moy, Timothy Kline
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01597-1

A digital twin is a computational model that provides a virtual representation of a specific physical object, system, or process and predicts its behavior at future time points. Such simulation models form computational profiles for new approaches to diagnosis and prevention. The digital twin concept was borrowed from engineering, but the technology's rapid evolution has extended its application across many industries. In recent years, digital twins in healthcare have gained significant traction owing to their potential to revolutionize medicine and drug development. In radiology, digital twin technology can be applied to optimizing medical device design, improving system performance, facilitating personalized medicine, conducting virtual clinical trials, and educating radiology trainees. Radiologic image data are also a critical source of the patient-specific measures needed to generate advanced, intelligent digital twins. Building a practical digital twin faces several challenges, including data availability, computational techniques, validation frameworks, and uncertainty quantification, all of which require collaboration among engineers, healthcare providers, and stakeholders. This review covers recent trends in digital twin technology at its intersection with radiology, surveying applications, technological advancements, and the challenges that must be addressed for successful implementation in the field.
Protecting Radiology Data and Devices Against Cybersecurity Threats: A Joint White Paper of the American College of Radiology (ACR) and Society for Imaging Informatics in Medicine (SIIM)
Po-Hao Chen, Benoit Desjardins, Brett Strassner, Reza Forghani, Robert Bodak, Judy Gichoya, James Whitfill, Eric C Ehman, Christoph Wald
Journal of Imaging Informatics in Medicine, 2025-08-04. DOI: 10.1007/s10278-025-01621-4

Modern radiology relies entirely on digital imaging systems, and healthcare records draw cybercriminals' attention. On-site and cloud-based informatics systems, including artificial intelligence, can boost efficiency but also create weak points. Medical imaging systems must safeguard patient identities, control access, secure devices, and coordinate with patients, payers, and billing partners. Endorsed by the American College of Radiology and the Society for Imaging Informatics in Medicine, this white paper outlines steps to methodically fortify the medical imaging pipeline, train frontline caregivers, and develop incident response and recovery strategies for data breaches and disabling ransomware attacks. Building a culture of safety in imaging also involves conducting routine simulations and actively engaging leadership. Taken together, these multidisciplinary interventions can minimize the clinical and operational impact of cybersecurity events, maintaining patient trust in an increasingly interconnected imaging workflow.