Journal of Imaging Informatics in Medicine: Latest Articles

Evaluation of an Automatic Cephalometric Superimposition Method Based on Feature Matching.
Journal of imaging informatics in medicine Pub Date: 2025-02-25 DOI: 10.1007/s10278-025-01447-0
Ling Zhao, Juneng Huang, Min Tang, Xuejun Zhang, Lijuan Xiao, Renchuan Tao
{"title":"Evaluation of an Automatic Cephalometric Superimposition Method Based on Feature Matching.","authors":"Ling Zhao, Juneng Huang, Min Tang, Xuejun Zhang, Lijuan Xiao, Renchuan Tao","doi":"10.1007/s10278-025-01447-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01447-0","url":null,"abstract":"<p><p>The objective of the study is to establish a novel method for automatic cephalometric superimposition on the basis of feature matching and compare it with the commonly used Sella-Nasion (SN) superimposition method. A total of 178 pairs of pre- (T1) and post-treatment (T2) lateral cephalometric radiographs (LCRs) from adult orthodontic patients were collected. Ninety LCR pairs were used to train the you only look once version 8 (YOLOv8) model to automatically recognize stable cranial reference areas. This approach represents a novel method for automated superimposition on the basis of feature matching. The remaining 88 LCR pairs were used for landmark identification by three orthodontic experts to evaluate the accuracy of the two superimposition methods. The Euclidean distances of 17 hard tissue landmarks were measured and statistically compared after superimposition. Significant differences were observed in the superimposition error of most landmarks between the two methods (p < 0.05). The successful detection rate (SDR) of automatic superimposition of each landmark within the precision ranges of 1 mm, 2 mm, and 3 mm via the new method was higher than that via the SN method. The new automatic superimposition method is more accurate than the SN method and is a reliable method for superimposing adult LCRs, thus providing support for clinical or research work.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
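The paper's pipeline first localizes stable cranial reference areas with YOLOv8 and then aligns the radiograph pair by matching features inside them. As a rough, hypothetical illustration of the matching-and-alignment step only (the detector is omitted), the sketch below registers the follow-up cephalogram onto the baseline with OpenCV ORB keypoints and a RANSAC-fitted partial-affine transform; inputs are assumed to be grayscale uint8 arrays, and the function name and parameters are illustrative.

```python
# A rough, hypothetical sketch of the feature-matching step only: ORB keypoints
# plus a RANSAC-fitted partial-affine transform. The paper's YOLOv8 detector for
# stable cranial reference areas is omitted; inputs are assumed grayscale uint8.
import cv2
import numpy as np

def superimpose(t1_img: np.ndarray, t2_img: np.ndarray) -> np.ndarray:
    """Warp the post-treatment (T2) cephalogram onto the pre-treatment (T1) one."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(t1_img, None)
    kp2, des2 = orb.detectAndCompute(t2_img, None)

    # Match T2 descriptors against T1 and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:100]
    src = np.float32([kp2[m.queryIdx].pt for m in matches])
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])

    # Rigid-like transform (rotation, translation, uniform scale) with RANSAC.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = t1_img.shape[:2]
    return cv2.warpAffine(t2_img, M, (w, h))
```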
Innovating Challenges and Experiences in Emory Health AI Bias Datathon: Experience Report.
Journal of imaging informatics in medicine Pub Date: 2025-02-25 DOI: 10.1007/s10278-024-01367-5
Atika Rahman Paddo, Saptarshi Purkayastha, Janice Newsome, Hari Trivedi, Judy Wawira Gichoya
{"title":"Innovating Challenges and Experiences in Emory Health AI Bias Datathon: Experience Report.","authors":"Atika Rahman Paddo, Saptarshi Purkayastha, Janice Newsome, Hari Trivedi, Judy Wawira Gichoya","doi":"10.1007/s10278-024-01367-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01367-5","url":null,"abstract":"<p><p>This paper presents an in-depth analysis of the Emory Health AI (Artificial Intelligence) Bias Datathon held in August 2023, providing insights into the experiences gained during the event. The datathon, focusing on health-related issues, attracted diverse participants, including professionals, researchers, and students from various backgrounds. The paper discusses the preparation, organization, and execution of the datathon, detailing the registration process, team formulation, dataset creation, and logistical aspects. We also explore the achievements and personal experiences of participants, highlighting their resilience, dedication, and innovative contributions. The findings include a breakdown of participant demographics, responses to post-event surveys, and participant backgrounds. Observing the trends, we believe the lessons learned, and the overall impact of the Emory Health AI Bias Datathon on the participants and the field of health data science will contribute significantly in organizing future datathons.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A New Method Using Deep Learning to Predict the Response to Cardiac Resynchronization Therapy.
Journal of imaging informatics in medicine Pub Date: 2025-02-20 DOI: 10.1007/s10278-024-01380-8
Kristoffer Larsen, Zhuo He, Fernando de A Fernandes, Xinwei Zhang, Chen Zhao, Qiuying Sha, Claudio T Mesquita, Diana Paez, Ernest V Garcia, Jiangang Zou, Amalia Peix, Guang-Uei Hung, Weihua Zhou
{"title":"A New Method Using Deep Learning to Predict the Response to Cardiac Resynchronization Therapy.","authors":"Kristoffer Larsen, Zhuo He, Fernando de A Fernandes, Xinwei Zhang, Chen Zhao, Qiuying Sha, Claudio T Mesquita, Diana Paez, Ernest V Garcia, Jiangang Zou, Amalia Peix, Guang-Uei Hung, Weihua Zhou","doi":"10.1007/s10278-024-01380-8","DOIUrl":"https://doi.org/10.1007/s10278-024-01380-8","url":null,"abstract":"<p><p>Clinical parameters measured from gated single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) have value in predicting cardiac resynchronization therapy (CRT) patient outcomes, but still show limitations. The purpose of this study is to combine clinical variables, features from electrocardiogram (ECG), and parameters from assessment of cardiac function with polar maps from gated SPECT MPI through deep learning (DL) to predict CRT response. A total of 218 patients who underwent rest-gated SPECT MPI were enrolled in this study. CRT response was defined as an increase in left ventricular ejection fraction (LVEF) > 5% at a 6-month follow-up. A DL model was constructed by combining a pre-trained VGG16 model and a multilayer perceptron. Two modalities of data were input to the model: polar map images from SPECT MPI and tabular data from clinical features, ECG parameters, and SPECT-MPI-derived parameters. Gradient-weighted class activation mapping (Grad-CAM) was applied to the VGG16 model to provide explainability for the polar maps. For comparison, four machine learning (ML) models were trained using only the tabular features. Modeling was performed on 218 patients who underwent CRT implantation with a response rate of 55.5% (n = 121). The DL model demonstrated average AUC (0.83), accuracy (0.73), sensitivity (0.76), and specificity (0.69) surpassing ML models and guideline criteria. Guideline recommendations achieved accuracy (0.53), sensitivity (0.75), and specificity (0.26). The DL model trended towards improvement over the ML models, showcasing the additional predictive benefit of utilizing SPECT MPI polar maps. Incorporating additional patient data directly in the form of medical imagery can improve CRT response prediction.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
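The described two-branch design, a pretrained VGG16 trunk for the SPECT MPI polar maps fused with a multilayer perceptron for the tabular data, can be sketched in PyTorch roughly as follows; layer widths, dropout, and the single-logit head are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal PyTorch sketch of the described two-branch model: a pretrained VGG16
# trunk for the polar maps fused with an MLP for the tabular data. Layer widths,
# dropout, and the single-logit head are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CRTResponseNet(nn.Module):
    def __init__(self, n_tabular: int):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.cnn = vgg.features                       # convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.mlp = nn.Sequential(                     # tabular branch
            nn.Linear(n_tabular, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Sequential(                    # fused classifier
            nn.Linear(512 * 7 * 7 + 32, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 1),                        # CRT-response logit
        )

    def forward(self, polar_map: torch.Tensor, tabular: torch.Tensor):
        img = self.pool(self.cnn(polar_map)).flatten(1)
        return self.head(torch.cat([img, self.mlp(tabular)], dim=1))
```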
Semi-supervised Label Generation for 3D Multi-modal MRI Bone Tumor Segmentation.
Journal of imaging informatics in medicine Pub Date: 2025-02-20 DOI: 10.1007/s10278-025-01448-z
Anna Curto-Vilalta, Benjamin Schlossmacher, Christina Valle, Alexandra Gersing, Jan Neumann, Ruediger von Eisenhart-Rothe, Daniel Rueckert, Florian Hinterwimmer
{"title":"Semi-supervised Label Generation for 3D Multi-modal MRI Bone Tumor Segmentation.","authors":"Anna Curto-Vilalta, Benjamin Schlossmacher, Christina Valle, Alexandra Gersing, Jan Neumann, Ruediger von Eisenhart-Rothe, Daniel Rueckert, Florian Hinterwimmer","doi":"10.1007/s10278-025-01448-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01448-z","url":null,"abstract":"<p><p>Medical image segmentation is challenging due to the need for expert annotations and the variability of these manually created labels. Previous methods tackling label variability focus on 2D segmentation and single modalities, but reliable 3D multi-modal approaches are necessary for clinical applications such as in oncology. In this paper, we propose a framework for generating reliable and unbiased labels with minimal radiologist input for supervised 3D segmentation, reducing radiologists' efforts and variability in manual labeling. Our framework generates AI-assisted labels through a two-step process involving 3D multi-modal unsupervised segmentation based on feature clustering and semi-supervised refinement. These labels are then compared against traditional expert-generated labels in a downstream task consisting of 3D multi-modal bone tumor segmentation. Two 3D-Unet models are trained, one with manually created expert labels and the other with AI-assisted labels. Following this, a blind evaluation is performed on the segmentations of these two models to assess the reliability of training labels. The framework effectively generated accurate segmentation labels with minimal expert input, achieving state-of-the-art performance. The model trained with AI-assisted labels outperformed the baseline model in 61.67% of blind evaluations, indicating the enhancement of segmentation quality and demonstrating the potential of AI-assisted labeling to reduce radiologists' workload and improve label reliability for 3D multi-modal bone tumor segmentation. The code is available at https://github.com/acurtovilalta/3D_LabelGeneration .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
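The unsupervised first stage groups voxels by features computed across the co-registered modalities. A minimal sketch of that idea, assuming two aligned MRI volumes and using plain per-voxel intensities with k-means; the paper's actual feature extraction and semi-supervised refinement are omitted.

```python
# A minimal sketch of the unsupervised first stage, assuming two co-registered
# MRI volumes as NumPy arrays and plain per-voxel intensities as features; the
# paper's actual feature extraction and semi-supervised refinement are omitted.
import numpy as np
from sklearn.cluster import KMeans

def cluster_voxels(t1_vol: np.ndarray, t2_vol: np.ndarray,
                   n_clusters: int = 4) -> np.ndarray:
    """Propose a candidate label volume by clustering multi-modal intensities."""
    feats = np.stack([t1_vol.ravel(), t2_vol.ravel()], axis=1).astype(np.float32)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)  # z-score
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(t1_vol.shape)   # label volume to be refined downstream
```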
Development of an Automated CAD System for Lesion Detection in DCE-MRI.
Journal of imaging informatics in medicine Pub Date: 2025-02-20 DOI: 10.1007/s10278-025-01445-2
Theofilos Andreadis, Konstantinos Chouchos, Nikolaos Courcoutsakis, Ioannis Seimenis, Dimitrios Koulouriotis
{"title":"Development of an Automated CAD System for Lesion Detection in DCE-MRI.","authors":"Theofilos Andreadis, Konstantinos Chouchos, Nikolaos Courcoutsakis, Ioannis Seimenis, Dimitrios Koulouriotis","doi":"10.1007/s10278-025-01445-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01445-2","url":null,"abstract":"<p><p>Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been recognized as an effective tool for early detection and characterization of breast lesions. This study proposes an automated computer-aided diagnosis (CAD) system to facilitate lesion detection in DCE-MRI. The system initially identifies and crops the breast tissue reducing the processed image region and, thus, resulting in lower computational burden. Then, Otsu's multilevel thresholding method is applied to detect and segment the suspicious regions of interest (ROIs), considering the dynamic enhancement changes across two post-contrast sequential phases. After segmentation, a two-stage false positive reduction process is applied. A rule-based stage is first applied, followed by the segmentation of control ROIs in the contralateral breast. A feature vector is then extracted from all ROIs and supervised classification is implemented using two classifiers (feed-forward backpropagation neural network (FFBPN) and support vector machine (SVM)). A dataset of 52 DCE-MRI exams was used for assessing the performance of the system in terms of accuracy, sensitivity, specificity, and precision. A total of 138 enhancing lesions were identified by an experienced radiologist and corresponded to CAD-detected ROIs. The system's overall sensitivity was 83% when the FFBPN classifier was used and 92% when the SVM was applied. Moreover, the calculated area under curve for the SVM classifier was 0.95. Both employed classifiers exhibited high performance in identifying enhancing lesions and in differentiating them from healthy parenchyma. Current results suggest that the employment of a CAD system can expedite lesion detection in DCE-MRI images and, therefore, further research over larger datasets is warranted.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
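Otsu's multilevel thresholding is available directly in scikit-image. A minimal sketch of the ROI-candidate step, assuming the dynamic enhancement change has been reduced to a single 2D subtraction image between the two sequential post-contrast phases; the class count and mask rule are illustrative assumptions.

```python
# A minimal sketch of the ROI-candidate step, assuming the enhancement change
# has been reduced to a single 2D subtraction image between the two sequential
# post-contrast phases; the class count and mask rule are illustrative.
import numpy as np
from skimage.filters import threshold_multiotsu

def candidate_rois(subtraction_img: np.ndarray, classes: int = 3) -> np.ndarray:
    """Flag the most strongly enhancing pixels as suspicious ROI candidates."""
    thresholds = threshold_multiotsu(subtraction_img, classes=classes)
    regions = np.digitize(subtraction_img, bins=thresholds)  # 0 .. classes-1
    return regions == classes - 1   # mask of the highest-enhancement class
```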
Enhancing Chest X-ray Diagnosis with a Multimodal Deep Learning Network by Integrating Clinical History to Refine Attention.
Journal of imaging informatics in medicine Pub Date: 2025-02-19 DOI: 10.1007/s10278-025-01446-1
Lian Yang, Yiliang Wan, Feng Pan
{"title":"Enhancing Chest X-ray Diagnosis with a Multimodal Deep Learning Network by Integrating Clinical History to Refine Attention.","authors":"Lian Yang, Yiliang Wan, Feng Pan","doi":"10.1007/s10278-025-01446-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01446-1","url":null,"abstract":"<p><p>The rapid advancements of deep learning technology have revolutionized medical imaging diagnosis. However, training these models is often challenged by label imbalance and the scarcity of certain diseases. Most models fail to recognize multiple coexisting diseases, which are common in real-world clinical scenarios. Moreover, most radiological models rely solely on image data, which contrasts with radiologists' comprehensive approach, incorporating both images and other clinical information such as clinical history and laboratory results. In this study, we introduce a Multimodal Chest X-ray Network (MCX-Net) that integrates chest X-ray images and clinical history texts for multi-label disease diagnosis. This integration is achieved by combining a pretrained text encoder, a pretrained image encoder, and a pretrained image-text cross-modal encoder, fine-tuned on the public MIMIC-CXR-JPG dataset, to diagnose 13 diverse lung diseases on chest X-rays. As a result, MCX-Net achieved the highest macro AUROC of 0.816 on the test set, significantly outperforming unimodal baselines such as ViT-base and ResNet152, which scored 0.747 and 0.749, respectively (p < 0.001). This multimodal approach represents a substantial advancement over existing image-based deep-learning diagnostic systems for chest X-rays.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143461349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
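A rough sketch of the image-text fusion idea follows. The paper uses pretrained image, text, and cross-modal encoders; this simplification uses a ResNet image branch, a BERT text branch, and plain feature concatenation. The model names and the 13-label head are assumptions, not the published MCX-Net architecture.

```python
# A rough sketch of image-text fusion for multi-label CXR diagnosis. The paper's
# cross-modal encoder is replaced here by simple concatenation; model names and
# the 13-label head are assumptions, not the published architecture.
import torch
import torch.nn as nn
from torchvision import models
from transformers import AutoModel

class MultimodalCXR(nn.Module):
    def __init__(self, n_labels: int = 13):
        super().__init__()
        self.img_enc = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
        self.img_enc.fc = nn.Identity()               # expose 2048-d image features
        self.txt_enc = AutoModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(2048 + 768, n_labels)   # multi-label logits

    def forward(self, image, input_ids, attention_mask):
        img = self.img_enc(image)
        txt = self.txt_enc(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        return self.head(torch.cat([img, txt], dim=1))
```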
NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors.
Journal of imaging informatics in medicine Pub Date: 2025-02-19 DOI: 10.1007/s10278-025-01440-7
Xuelian Yang, Yuanjun Wang, Li Sui
{"title":"NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors.","authors":"Xuelian Yang, Yuanjun Wang, Li Sui","doi":"10.1007/s10278-025-01440-7","DOIUrl":"https://doi.org/10.1007/s10278-025-01440-7","url":null,"abstract":"<p><p>Segmentation and classification of breast tumors are two critical tasks since they provide significant information for computer-aided breast cancer diagnosis. Combining these tasks leverages their intrinsic relevance to enhance performance, but the variability and complexity of tumor characteristics remain challenging. We propose a novel multi-task deep learning network (NMTNet) for the joint segmentation and classification of breast tumors, which is based on a convolutional neural network (CNN) and U-shaped architecture. It mainly comprises a shared encoder, a multi-scale fusion channel refinement (MFCR) module, a segmentation branch, and a classification branch. First, ResNet18 is used as the backbone network in the encoding part to enhance the feature representation capability. Then, the MFCR module is introduced to enrich the feature depth and diversity. Besides, the segmentation branch combines a lesion region enhancement (LRE) module between the encoder and decoder parts, aiming to capture more detailed texture and edge information of irregular tumors to improve segmentation accuracy. The classification branch incorporates a fine-grained classifier that reuses valuable segmentation information to discriminate between benign and malignant tumors. The proposed NMTNet is evaluated on both ultrasound and magnetic resonance imaging datasets. It achieves segmentation dice scores of 90.30% and 91.50%, and Jaccard indices of 84.70% and 88.10% for each dataset, respectively. And the classification accuracy scores are 87.50% and 99.64% for the corresponding datasets, respectively. Experimental results demonstrate the superiority of NMTNet over state-of-the-art methods on breast tumor segmentation and classification tasks.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143461367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
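The shared-encoder, two-branch layout can be sketched as below; the paper's MFCR and LRE modules and its fine-grained classifier are omitted, and the bare upsampling decoder is an illustrative stand-in for the U-shaped one.

```python
# A minimal sketch of the shared-encoder, two-branch layout. The paper's MFCR
# and LRE modules and its fine-grained classifier are omitted; the bare
# upsampling decoder stands in for the U-shaped one.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # 512-ch map
        self.decoder = nn.Sequential(                  # segmentation branch
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),                       # tumor-mask logits
        )
        self.classifier = nn.Sequential(               # classification branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 2),                         # benign vs. malignant
        )

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)                        # shared features
        return self.decoder(feats), self.classifier(feats)
```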
Mainecoon: Implementing an Open-Source Web Viewer for DICOM Whole Slide Images with AI-Integrated PACS for Digital Pathology.
Journal of imaging informatics in medicine Pub Date: 2025-02-18 DOI: 10.1007/s10278-025-01425-6
Chao-Wei Hsu, Si-Wei Yang, Yu-Ting Lee, Kai-Hsuan Yao, Tzu-Hsuan Hsu, Pau-Choo Chung, Yuan-Chia Chu, Chen-Tsung Kuo, Chung-Yueh Lien
{"title":"Mainecoon: Implementing an Open-Source Web Viewer for DICOM Whole Slide Images with AI-Integrated PACS for Digital Pathology.","authors":"Chao-Wei Hsu, Si-Wei Yang, Yu-Ting Lee, Kai-Hsuan Yao, Tzu-Hsuan Hsu, Pau-Choo Chung, Yuan-Chia Chu, Chen-Tsung Kuo, Chung-Yueh Lien","doi":"10.1007/s10278-025-01425-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01425-6","url":null,"abstract":"<p><p>The rapid advancement of digital pathology comes with significant challenges due to the diverse data formats from various scanning devices creating substantial obstacles to integrating artificial intelligence (AI) into the pathology imaging workflow. To overcome performance challenges posed by large AI-generated annotations, we developed an open-source project named Mainecoon for whole slide images (WSIs) using the Digital Imaging and Communications in Medicine (DICOM) standard. Our solution incorporates an AI model to detect non-alcoholic steatohepatitis (NASH) features in liver biopsies, validated with the DICOM Workgroup 26 Connectathon dataset. AI-generated results are encoded using the Microscopy Bulk Simple Annotations standard, which provides a standardized method supporting both manual and AI-generated annotations, promoting seamless integration of structured metadata with WSIs. We proposed a method by leveraging streaming and batch processing, significantly improving data loading efficiency, reducing user waiting times, and enhancing frontend performance. The web services of the AI model were implemented via the Flask framework, integrated with our viewer and an open-source medical image archive, Raccoon, with secure authentication provided by Keycloak for OAuth 2.0 authentication and node authentication at the National Cheng Kung University Hospital. Our architecture has demonstrated robustness, interoperability, and practical applicability, addressing real-world digital pathology challenges effectively.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
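The streaming-and-batching idea for large annotation sets can be sketched as a Flask endpoint that emits newline-delimited JSON batches instead of one monolithic response, letting the viewer render annotations as they arrive. The route, batch size, and data source below are hypothetical, not Mainecoon's actual API.

```python
# A hypothetical sketch of batched annotation streaming with Flask: annotations
# are flushed in NDJSON batches rather than one monolithic response. The route,
# batch size, and data source are assumptions, not Mainecoon's actual API.
import json
from flask import Flask, Response

app = Flask(__name__)
BATCH_SIZE = 500

def load_annotations(slide_id: str):
    """Placeholder: in practice, polygons would come from the archive (e.g., Raccoon)."""
    return iter([])   # hypothetical data source

@app.route("/slides/<slide_id>/annotations")
def stream_annotations(slide_id: str) -> Response:
    def generate():
        batch = []
        for ann in load_annotations(slide_id):
            batch.append(ann)
            if len(batch) == BATCH_SIZE:
                yield json.dumps(batch) + "\n"   # flush one NDJSON batch line
                batch = []
        if batch:
            yield json.dumps(batch) + "\n"       # flush the remainder
    return Response(generate(), mimetype="application/x-ndjson")
```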
Dual-Domain Self-Supervised Deep Learning with Graph Convolution for Low-Dose Computed Tomography Reconstruction.
Journal of imaging informatics in medicine Pub Date: 2025-02-18 DOI: 10.1007/s10278-024-01314-4
Feng Yang, Feixiang Zhao, Yanhua Liu, Min Liu, Mingzhe Liu
{"title":"Dual-Domain Self-Supervised Deep Learning with Graph Convolution for Low-Dose Computed Tomography Reconstruction.","authors":"Feng Yang, Feixiang Zhao, Yanhua Liu, Min Liu, Mingzhe Liu","doi":"10.1007/s10278-024-01314-4","DOIUrl":"https://doi.org/10.1007/s10278-024-01314-4","url":null,"abstract":"<p><p>X-ray computed tomography (CT) is a commonly used imaging modality in clinical practice. Recent years have seen increasing public concern regarding the ionizing radiation from CT. Low-dose CT (LDCT) has been proven to be effective in reducing patients' radiation exposure, but it results in CT images with low signal-to-noise ratio (SNR), failing to meet the image quality required for diagnosis. To enhance the SNR of LDCT images, numerous denoising strategies based on deep learning have been introduced, leading to notable advancements. Despite these advancements, most methods have relied on a supervised training paradigm. The challenge in acquiring aligned pairs of low-dose and normal-dose images in a clinical setting has limited their applicability. Recently, some self-supervised deep learning methods have enabled denoising using only noisy samples. However, these techniques are based on overly simplistic assumptions about noise and focus solely on CT sinogram denoising or image denoising, compromising their effectiveness. To address this, we introduce the Dual-Domain Self-supervised framework, termed DDoS, to accomplish effective LDCT denoising and reconstruction. The framework includes denoising in the sinogram domain, filtered back-projection reconstruction, and denoising in the image domain. By identifying the statistical characteristics of sinogram noise and CT image noise, we develop sinogram-denoising and CT image-denoising networks that are fully adapted to these characteristics. Both networks utilize a unified hybrid architecture that combines graph convolution and incorporates multiple channel attention modules, facilitating the extraction of local and non-local multi-scale features. Comprehensive experiments on two large-scale LDCT datasets demonstrate the superiority of DDoS framework over existing state-of-the-art methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
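The three-stage dual-domain pipeline (sinogram denoising, filtered back-projection, image denoising) can be sketched with scikit-image's FBP; the paper's learned graph-convolution denoisers are passed in as placeholder callables, and the uniform 180-degree angle grid is an assumption about the acquisition.

```python
# A minimal sketch of the three-stage dual-domain pipeline using scikit-image's
# filtered back-projection; the learned denoisers are placeholder callables, and
# the uniform 180-degree angle grid is an assumption.
import numpy as np
from skimage.transform import iradon

def reconstruct(noisy_sinogram: np.ndarray,
                denoise_sinogram, denoise_image) -> np.ndarray:
    """Sinogram denoising -> filtered back-projection -> image denoising."""
    theta = np.linspace(0.0, 180.0, noisy_sinogram.shape[1], endpoint=False)
    clean_sino = denoise_sinogram(noisy_sinogram)                # projection domain
    recon = iradon(clean_sino, theta=theta, filter_name="ramp")  # FBP
    return denoise_image(recon)                                  # image domain
```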
Automated Grading of Vesicoureteral Reflux (VUR) Using a Dual-Stream CNN Model with Deep Supervision.
Journal of imaging informatics in medicine Pub Date: 2025-02-14 DOI: 10.1007/s10278-025-01438-1
Guangjie Chen, Lixian Su, Shuxin Wang, Xiaoqing Liu, Wenqian Wu, Fandong Zhang, Yijun Zhao, Linfeng Zhu, Hongbo Zhang, Xiaohao Wang, Gang Yu
{"title":"Automated Grading of Vesicoureteral Reflux (VUR) Using a Dual-Stream CNN Model with Deep Supervision.","authors":"Guangjie Chen, Lixian Su, Shuxin Wang, Xiaoqing Liu, Wenqian Wu, Fandong Zhang, Yijun Zhao, Linfeng Zhu, Hongbo Zhang, Xiaohao Wang, Gang Yu","doi":"10.1007/s10278-025-01438-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01438-1","url":null,"abstract":"<p><p>Vesicoureteral reflux (VUR) is a urinary system disorder characterized by the abnormal flow of urine from the bladder back into the ureters and kidneys, often leading to renal complications, particularly in children. Accurate grading of VUR, typically determined through voiding cystourethrography (VCUG), is crucial for effective clinical management and treatment planning. This study proposes a novel multi-head convolutional neural network for the automatic grading of VUR from VCUG images. The model employs a dual-stream architecture with a modified ResNet-50 backbone, enabling independent analysis of the left and right urinary tracts. Our approach categorizes VUR into three distinct classes: no reflux, mild to moderate reflux, and severe reflux. The incorporation of deep supervision within the network enhances feature learning and improves the model's ability to detect subtle variations in VUR patterns. Experimental results indicate that the proposed method effectively grades VUR, achieving an average area under the receiver operating characteristic curve of 0.82 and a patient-level accuracy of 0.84. This provides a reliable tool to support clinical decision-making in pediatric cases.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
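A minimal sketch of the dual-stream idea: one ResNet-50 per side of the urinary tract, each with a three-class head (no reflux, mild to moderate, severe). Deep supervision and the paper's exact backbone modifications are omitted; the helper name and crop inputs are illustrative.

```python
# A minimal sketch of the dual-stream idea: one ResNet-50 per side, each with a
# three-class grade head. Deep supervision and the paper's exact backbone
# modifications are omitted.
import torch
import torch.nn as nn
from torchvision import models

def make_stream(n_classes: int = 3) -> nn.Module:
    stream = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    stream.fc = nn.Linear(stream.fc.in_features, n_classes)  # per-side grade head
    return stream

class DualStreamVUR(nn.Module):
    def __init__(self):
        super().__init__()
        self.left = make_stream()     # left urinary tract stream
        self.right = make_stream()    # right urinary tract stream

    def forward(self, left_crop: torch.Tensor, right_crop: torch.Tensor):
        # Each side is graded independently from its VCUG crop.
        return self.left(left_crop), self.right(right_crop)
```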