An Analysis of the Efficacy of Deep Learning-Based Pectoralis Muscle Segmentation in Chest CT for Sarcopenia Diagnosis
Joo Chan Choi, Young Jae Kim, Kwang Gi Kim, Eun Young Kim
Journal of Imaging Informatics in Medicine, 2025-02-26. DOI: 10.1007/s10278-025-01443-4

Abstract: Sarcopenia is the loss of skeletal muscle mass and function and is a poor prognostic factor. The condition is typically diagnosed by measuring skeletal muscle mass at the L3 vertebral level, which chest computed tomography (CT) scans do not include. We aimed to determine whether chest CT can nonetheless be used to diagnose sarcopenia and thus guide patient management and treatment decisions. This study compared the ResNet-UNet, Recurrent Residual UNet, and UNet3+ models for segmenting and measuring the pectoralis muscle area in chest CT images. A total of 4932 chest CT images were collected from 1644 patients, and additional abdominal CT data were collected from 294 patients. Model performance was evaluated using the Dice similarity coefficient (DSC), accuracy, sensitivity, and specificity, and the correlation between the segmented pectoralis and L3 muscle areas was assessed with linear regression analysis. All three models demonstrated high segmentation performance, with UNet3+ achieving the best results (DSC 0.95 ± 0.03). The Pearson correlation coefficient between the pectoralis and L3 muscle areas showed a significant positive correlation (r = 0.65). The correlation between the transformed pectoralis and L3 muscle areas was stronger in both univariate analysis using muscle area alone (r = 0.74) and multivariate analysis considering sex, weight, age, and muscle area (r = 0.83). Segmentation of the pectoralis muscle area on chest CT using artificial intelligence (AI) was highly accurate, and the measured values correlated strongly with the L3 muscle area. Chest CT combined with AI could therefore play a significant role in the diagnosis of sarcopenia.
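The DSC reported above is the standard overlap measure between a predicted and a reference binary mask. A minimal NumPy sketch of the definition (not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Two overlapping 4x4 masks: 8 pixels each, 4 pixels in common
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1
print(round(dice_coefficient(a, b), 3))  # 2*4/(8+8) = 0.5
```

A DSC of 0.95, as reported for UNet3+, means the predicted pectoralis mask and the expert mask overlap almost completely.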
Ultrasound Thyroid Nodule Segmentation Algorithm Based on DeepLabV3+ with EfficientNet
Nan Xiao, Demin Kong, Junfeng Wang
Journal of Imaging Informatics in Medicine, 2025-02-25. DOI: 10.1007/s10278-025-01436-3

Abstract: Ultrasound is widely used to monitor and diagnose thyroid nodules, but accurately segmenting these nodules in ultrasound images remains a challenge due to noise and artifacts, which often blur nodule boundaries. While several deep learning algorithms have been developed for this task, their performance is frequently suboptimal. In this study, we introduce EfficientNet-B7 as the backbone for the DeepLabV3+ architecture in thyroid nodule segmentation, marking its first application in this area. We evaluated the proposed method on a dataset from the First Affiliated Hospital of Zhengzhou University, along with two public datasets. The results demonstrate high performance, with a pixel accuracy (PA) of 97.67%, a Dice similarity coefficient of 0.8839, and an Intersection over Union (IoU) of 79.69%, outperforming most traditional segmentation networks.
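The two other metrics quoted above, pixel accuracy and IoU, are also simple mask statistics. A NumPy sketch of their definitions (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the reference."""
    return float((pred == target).mean())

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, target).sum() / union)

# Same toy masks: 8 pixels each, 4 in common -> union 12, 8 mismatched pixels of 16
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1
print(round(iou(a, b), 3))            # 4/12 = 0.333
print(round(pixel_accuracy(a, b), 3)) # 8/16 = 0.5
```

Note that IoU is always less than or equal to Dice for the same pair of masks, which is why the 79.69% IoU and 0.8839 Dice above are mutually consistent.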
Evaluation of an Automatic Cephalometric Superimposition Method Based on Feature Matching
Ling Zhao, Juneng Huang, Min Tang, Xuejun Zhang, Lijuan Xiao, Renchuan Tao
Journal of Imaging Informatics in Medicine, 2025-02-25. DOI: 10.1007/s10278-025-01447-0

Abstract: The objective of this study was to establish a novel method for automatic cephalometric superimposition based on feature matching and to compare it with the commonly used Sella-Nasion (SN) superimposition method. A total of 178 pairs of pre- (T1) and post-treatment (T2) lateral cephalometric radiographs (LCRs) from adult orthodontic patients were collected. Ninety LCR pairs were used to train the You Only Look Once version 8 (YOLOv8) model to automatically recognize stable cranial reference areas, yielding a novel automated superimposition method based on feature matching. The remaining 88 LCR pairs were used for landmark identification by three orthodontic experts to evaluate the accuracy of the two superimposition methods. The Euclidean distances of 17 hard-tissue landmarks were measured and statistically compared after superimposition. Significant differences in superimposition error were observed for most landmarks between the two methods (p < 0.05). The successful detection rate (SDR) of each landmark within the precision ranges of 1 mm, 2 mm, and 3 mm was higher with the new method than with the SN method. The new automatic superimposition method is more accurate than the SN method and is a reliable way to superimpose adult LCRs, supporting both clinical and research work.
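The SDR used above counts the fraction of landmarks whose Euclidean error falls within a given tolerance. A small NumPy sketch of this evaluation (illustrative, with made-up landmark coordinates):

```python
import numpy as np

def successful_detection_rate(pred_pts: np.ndarray, true_pts: np.ndarray,
                              radius_mm: float) -> float:
    """Fraction of landmarks whose Euclidean error is within radius_mm."""
    errors = np.linalg.norm(pred_pts - true_pts, axis=1)
    return float((errors <= radius_mm).mean())

# Three hypothetical landmarks (x, y) in mm after superimposition
pred = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
true = np.array([[0.5, 0.0], [3.0, 1.0], [1.0, 1.0]])
# per-landmark errors: 0.5, 3.0, 0.0 mm
print(successful_detection_rate(pred, true, 1.0))  # 2 of 3 within 1 mm
print(successful_detection_rate(pred, true, 3.0))  # all 3 within 3 mm
```

Reporting SDR at several radii (1, 2, 3 mm), as the study does, shows how quickly accuracy saturates as the tolerance loosens.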
Innovating Challenges and Experiences in Emory Health AI Bias Datathon: Experience Report
Atika Rahman Paddo, Saptarshi Purkayastha, Janice Newsome, Hari Trivedi, Judy Wawira Gichoya
Journal of Imaging Informatics in Medicine, 2025-02-25. DOI: 10.1007/s10278-024-01367-5

Abstract: This paper presents an in-depth analysis of the Emory Health AI (Artificial Intelligence) Bias Datathon held in August 2023, providing insights into the experiences gained during the event. The datathon, focusing on health-related issues, attracted diverse participants, including professionals, researchers, and students from various backgrounds. The paper discusses the preparation, organization, and execution of the datathon, detailing the registration process, team formation, dataset creation, and logistical aspects. We also explore the achievements and personal experiences of participants, highlighting their resilience, dedication, and innovative contributions. The findings include a breakdown of participant demographics, responses to post-event surveys, and participant backgrounds. We believe the observed trends, the lessons learned, and the overall impact of the Emory Health AI Bias Datathon on the participants and the field of health data science will contribute significantly to the organization of future datathons.
A New Method Using Deep Learning to Predict the Response to Cardiac Resynchronization Therapy
Kristoffer Larsen, Zhuo He, Fernando de A Fernandes, Xinwei Zhang, Chen Zhao, Qiuying Sha, Claudio T Mesquita, Diana Paez, Ernest V Garcia, Jiangang Zou, Amalia Peix, Guang-Uei Hung, Weihua Zhou
Journal of Imaging Informatics in Medicine, 2025-02-20. DOI: 10.1007/s10278-024-01380-8

Abstract: Clinical parameters measured from gated single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) have value in predicting cardiac resynchronization therapy (CRT) patient outcomes, but still show limitations. The purpose of this study is to combine clinical variables, electrocardiogram (ECG) features, and cardiac function parameters with polar maps from gated SPECT MPI through deep learning (DL) to predict CRT response. A total of 218 patients who underwent rest-gated SPECT MPI were enrolled. CRT response was defined as an increase in left ventricular ejection fraction (LVEF) > 5% at 6-month follow-up. A DL model was constructed by combining a pre-trained VGG16 model and a multilayer perceptron, taking two modalities of data as input: polar map images from SPECT MPI and tabular data comprising clinical features, ECG parameters, and SPECT-MPI-derived parameters. Gradient-weighted class activation mapping (Grad-CAM) was applied to the VGG16 model to provide explainability for the polar maps. For comparison, four machine learning (ML) models were trained using only the tabular features. Modeling was performed on 218 patients who underwent CRT implantation, with a response rate of 55.5% (n = 121). The DL model demonstrated average AUC (0.83), accuracy (0.73), sensitivity (0.76), and specificity (0.69), surpassing the ML models and guideline criteria; guideline recommendations achieved accuracy (0.53), sensitivity (0.75), and specificity (0.26). The DL model trended towards improvement over the ML models, showcasing the additional predictive benefit of SPECT MPI polar maps: incorporating patient data directly in the form of medical imagery can improve CRT response prediction.
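The fusion step described above concatenates an image embedding with tabular features before a classification head. A deliberately reduced NumPy sketch of this late-fusion idea (the real model uses a trained VGG16 and multilayer perceptron; the dimensions and weights here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(image_embedding: np.ndarray, tabular: np.ndarray,
                   weights: np.ndarray, bias: float) -> float:
    """Concatenate a CNN image embedding with tabular features, then apply a
    single logistic output unit (the MLP head collapsed to one layer)."""
    x = np.concatenate([image_embedding, tabular])
    z = float(x @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))  # probability of CRT response

emb = rng.normal(size=512)   # stand-in for pooled VGG16 features of the polar map
tab = rng.normal(size=10)    # stand-in for clinical + ECG + SPECT-derived values
w = rng.normal(size=522) * 0.01
p = fuse_and_score(emb, tab, w, 0.0)
print(0.0 < p < 1.0)  # True: a valid probability
```

The design point is that the tabular branch and the image branch meet only at the fused vector, so either modality can be ablated to measure its marginal contribution, which is how the study isolates the benefit of the polar maps.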
Semi-supervised Label Generation for 3D Multi-modal MRI Bone Tumor Segmentation
Anna Curto-Vilalta, Benjamin Schlossmacher, Christina Valle, Alexandra Gersing, Jan Neumann, Ruediger von Eisenhart-Rothe, Daniel Rueckert, Florian Hinterwimmer
Journal of Imaging Informatics in Medicine, 2025-02-20. DOI: 10.1007/s10278-025-01448-z

Abstract: Medical image segmentation is challenging due to the need for expert annotations and the variability of these manually created labels. Previous methods tackling label variability focus on 2D segmentation and single modalities, but reliable 3D multi-modal approaches are necessary for clinical applications such as oncology. In this paper, we propose a framework for generating reliable and unbiased labels with minimal radiologist input for supervised 3D segmentation, reducing radiologists' effort and the variability of manual labeling. Our framework generates AI-assisted labels through a two-step process: 3D multi-modal unsupervised segmentation based on feature clustering, followed by semi-supervised refinement. These labels are then compared against traditional expert-generated labels in a downstream task consisting of 3D multi-modal bone tumor segmentation. Two 3D-Unet models are trained, one with manually created expert labels and the other with AI-assisted labels, and a blind evaluation of their segmentations assesses the reliability of the training labels. The framework generated accurate segmentation labels with minimal expert input, achieving state-of-the-art performance. The model trained with AI-assisted labels outperformed the baseline model in 61.67% of blind evaluations, indicating improved segmentation quality and demonstrating the potential of AI-assisted labeling to reduce radiologists' workload and improve label reliability for 3D multi-modal bone tumor segmentation. The code is available at https://github.com/acurtovilalta/3D_LabelGeneration.
Development of an Automated CAD System for Lesion Detection in DCE-MRI
Theofilos Andreadis, Konstantinos Chouchos, Nikolaos Courcoutsakis, Ioannis Seimenis, Dimitrios Koulouriotis
Journal of Imaging Informatics in Medicine, 2025-02-20. DOI: 10.1007/s10278-025-01445-2

Abstract: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is recognized as an effective tool for early detection and characterization of breast lesions. This study proposes an automated computer-aided diagnosis (CAD) system to facilitate lesion detection in DCE-MRI. The system initially identifies and crops the breast tissue, reducing the processed image region and thus the computational burden. Otsu's multilevel thresholding method is then applied to detect and segment suspicious regions of interest (ROIs), considering the dynamic enhancement changes across two sequential post-contrast phases. After segmentation, a two-stage false-positive reduction process is applied: a rule-based stage, followed by the segmentation of control ROIs in the contralateral breast. A feature vector is extracted from all ROIs, and supervised classification is implemented using two classifiers, a feed-forward backpropagation neural network (FFBPN) and a support vector machine (SVM). A dataset of 52 DCE-MRI exams was used to assess the system in terms of accuracy, sensitivity, specificity, and precision. A total of 138 enhancing lesions were identified by an experienced radiologist and corresponded to CAD-detected ROIs. The system's overall sensitivity was 83% with the FFBPN classifier and 92% with the SVM, and the area under the curve for the SVM classifier was 0.95. Both classifiers exhibited high performance in identifying enhancing lesions and differentiating them from healthy parenchyma. These results suggest that a CAD system can expedite lesion detection in DCE-MRI; further research on larger datasets is therefore warranted.
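Otsu's method, which drives the ROI detection step above, picks the intensity threshold that maximizes between-class variance of the image histogram. The sketch below implements the single-threshold version in plain NumPy to show the core idea; the paper applies the multilevel generalization (several thresholds), which follows the same variance criterion:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, nbins: int = 256) -> float:
    """Single-threshold Otsu: choose the intensity cut that maximizes
    the between-class variance of the histogram."""
    hist, edges = np.histogram(image, bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # cumulative class-0 probability
    m = np.cumsum(hist * centers)     # cumulative mean intensity
    m_total = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (m_total * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_between = np.nan_to_num(var_between)  # empty-class bins contribute 0
    return float(centers[np.argmax(var_between)])

# Bimodal toy "image": dark parenchyma around 30, enhancing lesion around 200
img = np.concatenate([np.full(500, 30.0), np.full(100, 200.0)])
t = otsu_threshold(img)
print(30 < t < 200)  # True: the threshold separates the two intensity modes
```

In the CAD pipeline, pixels above the chosen threshold(s) in the post-contrast phases form the candidate ROIs that the false-positive reduction stages then filter.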
Enhancing Chest X-ray Diagnosis with a Multimodal Deep Learning Network by Integrating Clinical History to Refine Attention
Lian Yang, Yiliang Wan, Feng Pan
Journal of Imaging Informatics in Medicine, 2025-02-19. DOI: 10.1007/s10278-025-01446-1

Abstract: The rapid advancement of deep learning technology has revolutionized medical imaging diagnosis. However, training these models is often challenged by label imbalance and the scarcity of certain diseases, and most models fail to recognize multiple coexisting diseases, which are common in real-world clinical scenarios. Moreover, most radiological models rely solely on image data, in contrast with radiologists' comprehensive approach, which incorporates both images and other clinical information such as clinical history and laboratory results. In this study, we introduce a Multimodal Chest X-ray Network (MCX-Net) that integrates chest X-ray images and clinical history texts for multi-label disease diagnosis. This integration is achieved by combining a pretrained text encoder, a pretrained image encoder, and a pretrained image-text cross-modal encoder, fine-tuned on the public MIMIC-CXR-JPG dataset, to diagnose 13 diverse lung diseases on chest X-rays. MCX-Net achieved the highest macro AUROC of 0.816 on the test set, significantly outperforming unimodal baselines such as ViT-base and ResNet152, which scored 0.747 and 0.749, respectively (p < 0.001). This multimodal approach represents a substantial advancement over existing image-based deep-learning diagnostic systems for chest X-rays.
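The macro AUROC used to compare the models above is the per-disease AUROC averaged over all 13 label columns. A compact NumPy sketch of this metric (illustrative only, with toy scores for two diseases):

```python
import numpy as np

def auroc(scores, labels) -> float:
    """Rank-based AUROC: probability that a random positive case is
    scored higher than a random negative case (ties count half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

def macro_auroc(score_matrix, label_matrix) -> float:
    """Average the per-column (per-disease) AUROC across all labels."""
    return float(np.mean([auroc(score_matrix[:, j], label_matrix[:, j])
                          for j in range(label_matrix.shape[1])]))

# Four patients, two diseases; both columns happen to be perfectly ranked
scores = np.array([[0.9, 0.2], [0.8, 0.7], [0.1, 0.4], [0.3, 0.6]])
labels = np.array([[1, 0], [1, 1], [0, 0], [0, 1]])
print(macro_auroc(scores, labels))  # 1.0
```

Because macro averaging weights every disease equally regardless of prevalence, it rewards a model for the rare labels too, which matters for the label-imbalance problem the abstract raises.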
NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors
Xuelian Yang, Yuanjun Wang, Li Sui
Journal of Imaging Informatics in Medicine, 2025-02-19. DOI: 10.1007/s10278-025-01440-7

Abstract: Segmentation and classification of breast tumors are two critical tasks, since both provide significant information for computer-aided breast cancer diagnosis. Combining these tasks leverages their intrinsic relevance to enhance performance, but the variability and complexity of tumor characteristics remain challenging. We propose a novel multi-task deep learning network (NMTNet) for joint segmentation and classification of breast tumors, based on a convolutional neural network (CNN) and a U-shaped architecture. It comprises a shared encoder, a multi-scale fusion channel refinement (MFCR) module, a segmentation branch, and a classification branch. ResNet18 is used as the backbone network in the encoder to enhance feature representation, and the MFCR module enriches feature depth and diversity. The segmentation branch adds a lesion region enhancement (LRE) module between the encoder and decoder, aiming to capture more detailed texture and edge information of irregular tumors and thereby improve segmentation accuracy. The classification branch incorporates a fine-grained classifier that reuses valuable segmentation information to discriminate between benign and malignant tumors. NMTNet is evaluated on ultrasound and magnetic resonance imaging datasets, achieving segmentation Dice scores of 90.30% and 91.50% and Jaccard indices of 84.70% and 88.10%, respectively, with classification accuracies of 87.50% and 99.64% on the corresponding datasets. Experimental results demonstrate the superiority of NMTNet over state-of-the-art methods on breast tumor segmentation and classification tasks.
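Multi-task networks like the one described above are typically trained with a weighted sum of a segmentation loss and a classification loss over the shared encoder. The sketch below shows this generic pattern in NumPy (a soft-Dice term plus binary cross-entropy); it is an illustration of the common recipe, not NMTNet's actual loss:

```python
import numpy as np

def multitask_loss(seg_pred: np.ndarray, seg_true: np.ndarray,
                   cls_pred: float, cls_true: float,
                   alpha: float = 0.5, eps: float = 1e-7) -> float:
    """Weighted sum of a soft-Dice segmentation loss and a binary
    cross-entropy classification loss; alpha balances the two tasks."""
    inter = (seg_pred * seg_true).sum()
    dice = (2.0 * inter + eps) / (seg_pred.sum() + seg_true.sum() + eps)
    seg_loss = 1.0 - dice
    p = np.clip(cls_pred, eps, 1.0 - eps)
    cls_loss = -(cls_true * np.log(p) + (1.0 - cls_true) * np.log(1.0 - p))
    return float(alpha * seg_loss + (1.0 - alpha) * cls_loss)

mask = np.array([[0.0, 1.0], [1.0, 0.0]])
loss = multitask_loss(mask, mask, 0.99, 1.0)  # near-perfect on both tasks
print(loss < 0.01)  # True: both terms are close to zero
```

Because both branches backpropagate through the shared encoder, improving one task regularizes the features used by the other, which is the intrinsic relevance the abstract refers to.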
Mainecoon: Implementing an Open-Source Web Viewer for DICOM Whole Slide Images with AI-Integrated PACS for Digital Pathology
Chao-Wei Hsu, Si-Wei Yang, Yu-Ting Lee, Kai-Hsuan Yao, Tzu-Hsuan Hsu, Pau-Choo Chung, Yuan-Chia Chu, Chen-Tsung Kuo, Chung-Yueh Lien
Journal of Imaging Informatics in Medicine, 2025-02-18. DOI: 10.1007/s10278-025-01425-6

Abstract: The rapid advancement of digital pathology brings significant challenges: the diverse data formats produced by different scanning devices create substantial obstacles to integrating artificial intelligence (AI) into the pathology imaging workflow. To overcome the performance challenges posed by large AI-generated annotations, we developed an open-source project named Mainecoon for whole slide images (WSIs) using the Digital Imaging and Communications in Medicine (DICOM) standard. Our solution incorporates an AI model that detects non-alcoholic steatohepatitis (NASH) features in liver biopsies, validated with the DICOM Workgroup 26 Connectathon dataset. AI-generated results are encoded using the Microscopy Bulk Simple Annotations standard, which supports both manual and AI-generated annotations and promotes seamless integration of structured metadata with WSIs. We propose a method that leverages streaming and batch processing, significantly improving data-loading efficiency, reducing user waiting times, and enhancing frontend performance. The web services of the AI model were implemented with the Flask framework and integrated with our viewer and an open-source medical image archive, Raccoon, with secure authentication provided by Keycloak (OAuth 2.0 and node authentication) at the National Cheng Kung University Hospital. Our architecture has demonstrated robustness, interoperability, and practical applicability, effectively addressing real-world digital pathology challenges.