Integrating Global Health Initiatives into Routine Radiology Workflow in the USA.
Kevin Junck, Jordan D Perchik, Matthew Larrison, Adam Yates, Stephen Durham, Vamsi Penmetsa, Srini Tridandapani
Journal of Imaging Informatics in Medicine, pp. 2580-2584, August 2025. DOI: 10.1007/s10278-024-01356-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343389/pdf/

Radiologist shortages and lack of access to radiology services are common issues in low- and middle-income countries around the world. Teleradiology offers radiologists an opportunity to contribute to global health and support hospital systems in low-resource regions remotely. Challenges can occur when determining how to integrate the new remote worklist, how radiologists will view and report exams, and how a US host site can ensure safety and privacy across the different systems. In this manuscript, we describe our experience integrating exams performed at a remote hospital system in Ethiopia into a routine radiology worklist in the USA.
RIDGE: Reproducibility, Integrity, Dependability, Generalizability, and Efficiency Assessment of Medical Image Segmentation Models.
Farhad Maleki, Linda Moy, Reza Forghani, Tapotosh Ghosh, Katie Ovens, Steve Langer, Pouria Rouzrokh, Bardia Khosravi, Ali Ganjizadeh, Daniel Warren, Roxana Daneshjou, Mana Moassefi, Atlas Haddadi Avval, Susan Sotardi, Neil Tenenholtz, Felipe Kitamura, Timothy Kline
Journal of Imaging Informatics in Medicine, pp. 2524-2536, August 2025. DOI: 10.1007/s10278-024-01282-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343378/pdf/

Deep learning techniques hold immense promise for advancing medical image analysis, particularly in tasks like image segmentation, where precise annotation of regions or volumes of interest within medical images is crucial but manually laborious and prone to interobserver and intraobserver biases. As such, deep learning approaches could provide automated solutions for such applications. However, the potential of these techniques is often undermined by challenges in reproducibility and generalizability, which are key barriers to their clinical adoption. This paper introduces the RIDGE checklist, a comprehensive framework designed to assess the Reproducibility, Integrity, Dependability, Generalizability, and Efficiency of deep learning-based medical image segmentation models. The RIDGE checklist is not just a tool for evaluation but also a guideline for researchers striving to improve the quality and transparency of their work. By adhering to the principles outlined in the RIDGE checklist, researchers can ensure that their developed segmentation models are robust, scientifically valid, and applicable in a clinical setting.
{"title":"Automatic Classification of Focal Liver Lesions Based on Multi-Sequence MRI.","authors":"Mingfang Hu, Shuxin Wang, Mingjie Wu, Ting Zhuang, Xiaoqing Liu, Yuqin Zhang","doi":"10.1007/s10278-024-01326-0","DOIUrl":"10.1007/s10278-024-01326-0","url":null,"abstract":"<p><p>Accurate and automated diagnosis of focal liver lesions is critical for effective radiological practice and patient treatment planning. This study presents a deep learning model specifically developed for classifying focal liver lesions across eight different MRI sequences, categorizing them into seven distinct classes. The model includes a feature extraction module that derives multi-level representations of the lesions, a feature fusion attention module to integrate contextual information from the various sequences, and an attention-guided data augmentation module to enrich the training dataset. The proposed model achieved a patient-wise classification accuracy of 0.9302 and a lesion-wise accuracy of 0.8592, along with an F1-score of 0.8395, a recall of 0.8296, and a precision of 0.8551. These findings demonstrate the effectiveness of combining multi-sequence MRI with advanced deep learning methodologies, providing a robust tool to support radiologists in accurately classifying liver lesions in clinical settings.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1986-1998"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Automated Semantic Segmentation in Mammography Images for Enhanced Clinical Applications.
Cesar A Sierra-Franco, Jan Hurtado, Victor de A Thomaz, Leonardo C da Cruz, Santiago V Silva, Greis Francy M Silva-Calpa, Alberto Raposo
Journal of Imaging Informatics in Medicine, pp. 2260-2280, August 2025. DOI: 10.1007/s10278-024-01364-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343431/pdf/

Mammography images are widely used to detect non-palpable breast lesions or nodules, aiding in cancer prevention and enabling timely intervention when necessary. To support medical analysis, computer-aided detection systems can automate the segmentation of landmark structures, which is helpful in locating abnormalities and evaluating image acquisition adequacy. This paper presents a deep learning-based framework for segmenting the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue in standard-view mammography images. To the best of our knowledge, we introduce the largest dataset dedicated to mammography segmentation of key anatomical structures, specifically designed to train deep learning models for this task. Through comprehensive experiments, we evaluated various deep learning model architectures and training configurations, demonstrating robust segmentation performance across diverse and challenging cases. These results underscore the framework's potential for clinical integration. In our experiments, four semantic segmentation architectures were compared, all showing suitability for the target problem, thereby offering flexibility in model selection. Beyond segmentation, we introduce a suite of applications derived from this framework to assist in clinical assessments. These include automating tasks such as multi-view lesion registration and anatomical position estimation, evaluating image acquisition quality, measuring breast density, and enhancing visualization of breast tissues, thus addressing critical needs in breast cancer screening and diagnosis.
{"title":"Cone Beam Computed Tomography Image-Quality Improvement Using \"One-Shot\" Super-resolution.","authors":"Takumasa Tsuji, Soichiro Yoshida, Mitsuki Hommyo, Asuka Oyama, Shinobu Kumagai, Kenshiro Shiraishi, Jun'ichi Kotoku","doi":"10.1007/s10278-024-01346-w","DOIUrl":"10.1007/s10278-024-01346-w","url":null,"abstract":"<p><p>Cone beam computed tomography (CBCT) images are convenient representations for obtaining information about patients' internal organs, but their lower image quality than those of treatment planning CT images constitutes an important shortcoming. Several proposed CBCT image-quality improvement methods based on deep learning require large amounts of training data. Our newly developed model using a super-resolution method, \"one-shot\" super-resolution (OSSR) based on the \"zero-shot\" super-resolution method, requires only small amounts of training data to improve CBCT image quality using only the target CBCT image and the paired treatment planning CT image. For this study, pelvic CBCT images and treatment planning CT images of 30 prostate cancer patients were used. We calculated the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) to evaluate image-quality improvement and normalized mutual information (NMI) as a quantitative evaluation of positional accuracy. Our proposed method can improve CBCT image quality without requiring large amounts of training data. After applying our proposed method, the resulting RMSE, PSNR, SSIM, and NMI between the CBCT images and the treatment planning CT images were as much as 0.86, 1.05, 1.03, and 1.31 times better than those obtained without using our proposed method. By comparison, CycleGAN exhibited values of 0.91, 1.03, 1.02, and 1.16. The proposed method achieved performance equivalent to that of CycleGAN, which requires images from approximately 30 patients for training. Findings demonstrated improvement of CBCT image quality using only the target CBCT images and the paired treatment planning CT images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2120-2133"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12344046/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volumetric Integrated Classification Index: An Integrated Voxel-Based Morphometry and Machine Learning Interpretable Biomarker for Post-Traumatic Stress Disorder.","authors":"Yulong Jia, Beining Yang, Haotian Xin, Qunya Qi, Yu Wang, Liyuan Lin, Yingying Xie, Chaoyang Huang, Jie Lu, Wen Qin, Nan Chen","doi":"10.1007/s10278-024-01313-5","DOIUrl":"10.1007/s10278-024-01313-5","url":null,"abstract":"<p><p>PTSD is a complex mental health condition triggered by individuals' traumatic experiences, with long-term and broad impacts on sufferers' psychological health and quality of life. Despite decades of research providing partial understanding of the pathobiological aspects of PTSD, precise neurobiological markers and imaging indicators remain challenging to pinpoint. This study employed VBM analysis and machine learning algorithms to investigate structural brain changes in PTSD patients. Data were sourced ADNI-DoD database for PTSD cases and from the ADNI database for healthy controls. Various machine learning models, including SVM, RF, and LR, were utilized for classification. Additionally, the VICI was proposed to enhance model interpretability, incorporating SHAP analysis. The association between PTSD risk genes and VICI values was also explored through gene expression data analysis. Among the tested machine learning algorithms, RF emerged as the top performer, achieving high accuracy in classifying PTSD patients. Structural brain abnormalities in PTSD patients were predominantly observed in prefrontal areas compared to healthy controls. The proposed VICI demonstrated classification efficacy comparable to the optimized RF model, indicating its potential as a simplified diagnostic tool. Analysis of gene expression data revealed significant associations between PTSD risk genes and VICI values, implicating synaptic integrity and neural development regulation. This study reveals neuroimaging and genetic characteristics of PTSD, highlighting the potential of VBM analysis and machine learning models in diagnosis and prognosis. The VICI offers a promising approach to enhance model interpretability and guide clinical decision-making. These findings contribute to a better understanding of the pathophysiological mechanisms of PTSD and provide new avenues for future diagnosis and treatment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1924-1934"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pneumonia Detection from Chest X-Ray Images Using Deep Learning and Transfer Learning for Imbalanced Datasets.
Faisal Alshanketi, Abdulrahman Alharbi, Mathew Kuruvilla, Vahid Mahzoon, Shams Tabrez Siddiqui, Nadim Rana, Ali Tahir
Journal of Imaging Informatics in Medicine, pp. 2021-2040, August 2025. DOI: 10.1007/s10278-024-01334-0. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12344030/pdf/

Pneumonia remains a significant global health challenge, necessitating timely and accurate diagnosis for effective treatment. In recent years, deep learning techniques have emerged as powerful tools for automating pneumonia detection from chest X-ray images. This paper provides a comprehensive investigation into the application of deep learning for pneumonia detection, with an emphasis on overcoming the challenges posed by imbalanced datasets. The study evaluates the performance of various deep learning architectures, including visual geometry group (VGG) networks, residual networks (ResNet), and Vision Transformers (ViT), along with strategies to mitigate the impact of imbalanced datasets, on publicly available datasets such as the Chest X-Ray Images (Pneumonia) dataset, the BRAX dataset, and the CheXpert dataset. Additionally, transfer learning from models pre-trained on ImageNet is investigated to leverage prior knowledge for improved performance on pneumonia detection tasks. Our investigation extends to zero-shot and few-shot learning experiments on datasets from different geographical regions. The study also explores semi-supervised learning methods, including the Mean Teacher algorithm, to utilize unlabeled data effectively. Experimental results demonstrate the efficacy of transfer learning, data augmentation, and balanced class weights in addressing imbalanced datasets, leading to improved accuracy and performance in pneumonia detection. Our findings emphasize the importance of selecting appropriate strategies based on dataset characteristics, with semi-supervised learning showing particular promise in leveraging unlabeled data. The findings highlight the potential of deep learning techniques in revolutionizing pneumonia diagnosis and treatment, paving the way for more efficient and accurate clinical workflows in the future.
ChatGPT vs Gemini: Comparative Accuracy and Efficiency in CAD-RADS Score Assignment from Radiology Reports.
Matthew Silbergleit, Adrienn Tóth, Jordan H Chamberlin, Mohamed Hamouda, Dhiraj Baruah, Sydney Derrick, U Joseph Schoepf, Jeremy R Burt, Ismail M Kabakus
Journal of Imaging Informatics in Medicine, pp. 2303-2311, August 2025. DOI: 10.1007/s10278-024-01328-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343400/pdf/

This study aimed to evaluate the accuracy and efficiency of ChatGPT-3.5, ChatGPT-4o, Google Gemini, and Google Gemini Advanced in generating CAD-RADS scores based on radiology reports. This retrospective study analyzed 100 consecutive coronary computed tomography angiography reports performed between March 15, 2024, and April 1, 2024, at a single tertiary center. Each report containing a radiologist-assigned CAD-RADS score was processed using four large language models (LLMs) without fine-tuning. The findings section of each report was input into the LLMs, and the models were tasked with generating CAD-RADS scores. The accuracy of LLM-generated scores was compared to the radiologist's score. Additionally, the time taken by each model to complete the task was recorded. Statistical analyses included the Mann-Whitney U test and interobserver agreement measured with unweighted Cohen's kappa and Krippendorff's alpha. ChatGPT-4o demonstrated the highest accuracy, correctly assigning CAD-RADS scores in 87% of cases (κ = 0.838, α = 0.886), followed by Gemini Advanced with 82.6% accuracy (κ = 0.784, α = 0.897). ChatGPT-3.5, although the fastest (median time = 5 s), was the least accurate (50.5% accuracy, κ = 0.401, α = 0.787). Gemini exhibited a higher failure rate (12%) than the other models, with Gemini Advanced slightly improving upon its predecessor. ChatGPT-4o outperformed the other LLMs in both accuracy and agreement with radiologist-assigned CAD-RADS scores, though ChatGPT-3.5 was significantly faster. Despite their potential, current publicly available LLMs require further refinement before being deployed for clinical decision-making in CAD-RADS scoring.
{"title":"The Impact of Artificial Intelligence on Radiologists' Reading Time in Bone Age Radiograph Assessment: A Preliminary Retrospective Observational Study.","authors":"Sejin Jeong, Kyunghwa Han, Yaeseul Kang, Eun-Kyung Kim, Kyungchul Song, Shreyas Vasanawala, Hyun Joo Shin","doi":"10.1007/s10278-024-01323-3","DOIUrl":"10.1007/s10278-024-01323-3","url":null,"abstract":"<p><p>To evaluate the real-world impact of artificial intelligence (AI) on radiologists' reading time during bone age (BA) radiograph assessments. Patients (<19 year-old) who underwent left-hand BA radiographs between December 2021 and October 2023 were retrospectively included. A commercial AI software was installed from October 2022. Radiologists' reading times, automatically recorded in the PACS log, were compared between the AI-unaided and AI-aided periods using linear regression tests and factors affecting reading time were identified. A total of 3643 radiographs (M:F=1295:2348, mean age 9.12 ± 2.31 years) were included and read by three radiologists, with 2937 radiographs (80.6%) in the AI-aided period. Overall reading times were significantly shorter in the AI-aided period compared to the AI-unaided period (mean 17.2 ± 12.9 seconds vs. mean 22.3 ± 14.7 seconds, p < 0.001). Staff reading times significantly decreased in the AI-aided period (mean 15.9 ± 11.4 seconds vs. mean 19.9 ± 13.4 seconds, p < 0.001), while resident reading times increased (mean 38.3 ± 16.4 seconds vs. 33.6 ± 15.3 seconds, p = 0.013). The use of AI and years of experience in radiology were significant factors affecting reading time (all, p≤0.001). The degree of decrease in reading time as experience increased was larger when utilizing AI (-1.151 for AI-unaided, -1.866 for AI-aided, difference =-0.715, p<0.001). In terms of AI exposure time, the staff's reading time decreased by 0.62 seconds per month (standard error 0.07, p<0.001) during the AI-aided period. The reading time of radiologists for BA assessment was influenced by AI. The time-saving effect of utilizing AI became more pronounced as the radiologists' experience and AI exposure time increased.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1915-1923"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12344022/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Neural Network for Segmenting Tumours in Ultrasound Rectal Images.
Yuanxi Zhang, Xiwen Deng, Tingting Li, Yuan Li, Xiaohui Wang, Man Lu, Lifeng Yang
Journal of Imaging Informatics in Medicine, pp. 2229-2240, August 2025. DOI: 10.1007/s10278-024-01358-6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12344021/pdf/

Ultrasound imaging is the most cost-effective approach for the early detection of rectal cancer, which is a high-risk cancer. Our goal was to design an effective method that can accurately identify and segment rectal tumours in ultrasound images, thereby facilitating rectal cancer diagnosis for physicians. This would allow physicians to devote more time to determining whether the tumour is benign or malignant and whether it has metastasized, rather than merely confirming its presence. Data originated from the Sichuan Province Cancer Hospital. The test, training, and validation sets comprised 53 patients with 173 images, 195 patients with 1247 images, and 20 patients with 87 images, respectively. We created a deep learning network architecture consisting of encoders and decoders. To enhance global information capture, we substituted traditional convolutional decoders with global attention decoders and incorporated effective channel information fusion for multiscale information integration. The Dice coefficient (DSC) of the proposed model was 75.49%, 4.03 points higher than that of the benchmark model, and the 95th-percentile Hausdorff distance (HD95) was 24.75, 8.43 lower than that of the benchmark model. A paired t-test confirmed the statistical significance of the difference between our model and the benchmark model (p < 0.05). The proposed method effectively identifies and segments rectal tumours of diverse shapes and distinguishes between normal rectal images and those containing tumours. Therefore, after consultation with physicians, we believe that our method can effectively assist physicians in diagnosing rectal tumours via ultrasound.