{"title":"Feasibility of Three-Dimension Chemical Exchange Saturation Transfer MRI for Predicting Tumor and Node Staging in Rectal Adenocarcinoma: An Exploration of Optimal ROI Measurement.","authors":"Xiao Wang, Wenguang Liu, Ismail Bilal Masokano, Weiyin Vivian Liu, Yigang Pei, Wenzheng Li","doi":"10.1007/s10278-024-01029-6","DOIUrl":"10.1007/s10278-024-01029-6","url":null,"abstract":"<p><p>To investigate the feasibility of predicting rectal adenocarcinoma (RA) tumor (T) and node (N) staging from an optimal ROI measurement using amide proton transfer weighted-signal intensity (APTw-SI) and magnetization transfer (MT) derived from three-dimensional chemical exchange saturation transfer(3D-CEST). Fifty-eight RA patients with pathological TN staging underwent 3D-CEST and DWI. APTw-SI, MT, and ADC values were measured using three ROI approaches (ss-ROI, ts-ROI, and wt-ROI) to analyze the TN staging (T staging, T1-2 vs T3-4; N staging, N - vs N +); the reproducibility of APTw-SI and MT was also evaluated. The AUC was used to assess the staging performance and determine the optimal ROI strategy. MT and APTw-SI yielded good excellent reproducibility with three ROIs, respectively. Significant differences in MT were observed (all P < 0.05) from various ROIs but not in APTw-SI and ADC (all P > 0.05) in the TN stage. AUCs of MT from ss-ROI were 0.860 (95% CI, 0.743-0.937) and 0.852 (95% CI, 0.735-0.932) for predicting T and N staging, which is similar to ts-ROI (T staging, 0.856 [95% CI, 0.739-0.934]; N staging, 0.831 [95% CI, 0.710-0.917]) and wt-ROI (T staging, 0.833 [95% CI, 0.712-0.918]; N staging, 0.848 [95% CI, 0.729-0.929]) (all P > 0.05). MT value of 3D-CEST has excellent TN staging predictive performance in RA patients with all three kinds of ROI methods. 
The ss-ROI is easy to operate and could serve as the preferred ROI approach for clinical and research applications of 3D-CEST imaging.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"946-956"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142142304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the Annotation Process in Computational Pathology: A Pilot Study with Manual and Semi-automated Approaches on Consumer and Medical Grade Devices.","authors":"Giorgio Cazzaniga, Fabio Del Carro, Albino Eccher, Jan Ulrich Becker, Giovanni Gambaro, Mattia Rossi, Federico Pieruzzi, Filippo Fraggetta, Fabio Pagni, Vincenzo L'Imperio","doi":"10.1007/s10278-024-01248-x","DOIUrl":"10.1007/s10278-024-01248-x","url":null,"abstract":"<p><p>The development of reliable artificial intelligence (AI) algorithms in pathology often depends on ground truth provided by annotation of whole slide images (WSI), a time-consuming and operator-dependent process. A comparative analysis of different annotation approaches is performed to streamline this process. Two pathologists annotated renal tissue using semi-automated (Segment Anything Model, SAM)) and manual devices (touchpad vs mouse). A comparison was conducted in terms of working time, reproducibility (overlap fraction), and precision (0 to 10 accuracy rated by two expert nephropathologists) among different methods and operators. The impact of different displays on mouse performance was evaluated. Annotations focused on three tissue compartments: tubules (57 annotations), glomeruli (53 annotations), and arteries (58 annotations). The semi-automatic approach was the fastest and had the least inter-observer variability, averaging 13.6 ± 0.2 min with a difference (Δ) of 2%, followed by the mouse (29.9 ± 10.2, Δ = 24%), and the touchpad (47.5 ± 19.6 min, Δ = 45%). The highest reproducibility in tubules and glomeruli was achieved with SAM (overlap values of 1 and 0.99 compared to 0.97 for the mouse and 0.94 and 0.93 for the touchpad), though SAM had lower reproducibility in arteries (overlap value of 0.89 compared to 0.94 for both the mouse and touchpad). No precision differences were observed between operators (p = 0.59). Using non-medical monitors increased annotation times by 6.1%. 
The future employment of semi-automated and AI-assisted approaches can significantly speed up the annotation process, improving the ground truth for AI tool development.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1112-1119"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950598/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142135011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Diabetic Retinopathy Using Discrete Wavelet-Based Center-Symmetric Local Binary Pattern and Statistical Features.","authors":"Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore","doi":"10.1007/s10278-024-01243-2","DOIUrl":"10.1007/s10278-024-01243-2","url":null,"abstract":"<p><p>Computer-aided diagnosis (CAD) system assists ophthalmologists in early diabetic retinopathy (DR) detection by automating the analysis of retinal images, enabling timely intervention and treatment. This paper introduces a novel CAD system based on the global and multi-resolution analysis of retinal images. As a first step, we enhance the quality of the retinal images by applying a sequence of preprocessing techniques, which include the median filter, contrast limited adaptive histogram equalization (CLAHE), and the unsharp filter. These preprocessing steps effectively eliminate noise and enhance the contrast in the retinal images. Further, these images are represented at multi-scales using discrete wavelet transform (DWT), and center symmetric local binary pattern (CSLBP) features are extracted from each scale. The extracted CSLBP features from decomposed images capture the fine and coarse details of the retinal fundus images. Also, statistical features are extracted to capture the global characteristics and provide a comprehensive representation of retinal fundus images. The detection performances of these features are evaluated on a benchmark dataset using two machine learning models, i.e., SVM and k-NN, and found that the performance of the proposed work is considerably more encouraging than other existing methods. 
Furthermore, the results demonstrate that when wavelet-based CSLBP features are combined with statistical features, they yield notably improved detection performance compared to using these features individually.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1184-1211"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950458/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142142303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differences in Topography of Individual Amyloid Brain Networks by Amyloid PET Images in Healthy Control, Mild Cognitive Impairment, and Alzheimer's Disease.","authors":"Tsung-Ying Ho, Shu-Hua Huang, Chi-Wei Huang, Kun-Ju Lin, Jung-Lung Hsu, Kuo-Lun Huang, Ko-Ting Chen, Chiung-Chih Chang, Ing-Tsung Hsiao, Sheng-Yao Huang","doi":"10.1007/s10278-024-01230-7","DOIUrl":"10.1007/s10278-024-01230-7","url":null,"abstract":"<p><p>Amyloid plaques, implicated in Alzheimer's disease, exhibit a spatial propagation pattern through interconnected brain regions, suggesting network-driven dissemination. This study utilizes PET imaging to investigate these brain connections and introduces an innovative method for analyzing the amyloid network. A modified version of a previously established method is applied to explore distinctive patterns of connectivity alterations across cognitive performance domains. PET images illustrate differences in amyloid accumulation, complemented by quantitative network indices. The normal control group shows minimal amyloid accumulation and preserved network connectivity. The MCI group displays intermediate amyloid deposits and partial similarity to normal controls and AD patients, reflecting the evolving nature of cognitive decline. Alzheimer's disease patients exhibit high amyloid levels and pronounced disruptions in network connectivity, which are reflected in low levels of global efficiency (Eg) and local efficiency (Eloc). It is mostly in the temporal lobe where connectivity alterations are found, particularly in regions related to memory and cognition. Network connectivity alterations, combined with amyloid PET imaging, show potential as discriminative markers for different cognitive states. Dataset-specific variations must be considered when interpreting connectivity patterns. 
The variability in MCI and AD overlap emphasizes the heterogeneity in cognitive decline progression, suggesting personalized approaches for neurodegenerative disorders. This study contributes to understanding the evolving network characteristics associated with normal cognition, MCI, and AD, offering valuable insights for developing diagnostic and prognostic markers.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"681-693"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950497/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142135009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Pathological Characteristics of HER2-Positive Breast Cancer from Ultrasound Images: a Deep Ensemble Approach.","authors":"Zhi-Hui Chen, Hai-Ling Zha, Qing Yao, Wen-Bo Zhang, Guang-Quan Zhou, Cui-Ying Li","doi":"10.1007/s10278-024-01229-0","DOIUrl":"10.1007/s10278-024-01229-0","url":null,"abstract":"<p><p>The objective is to evaluate the feasibility of utilizing ultrasound images in identifying critical prognostic biomarkers for HER2-positive breast cancer (HER2 + BC). This study enrolled 512 female patients diagnosed with HER2-positive breast cancer through pathological validation at our institution from January 2016 to December 2021. Five distinct deep convolutional neural networks (DCNNs) and a deep ensemble (DE) approach were trained to classify axillary lymph node involvement (ALNM), lymphovascular invasion (LVI), and histological grade (HG). The efficacy of the models was evaluated based on accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps. DeLong test was applied to compare differences in AUC among different models. The deep ensemble approach, as the most effective model, demonstrated AUCs and accuracy of 0.869 (95% CI: 0.802-0.936) and 69.7% in LVI, 0.973 (95% CI: 0.949-0.998) and 73.8% in HG, thus providing superior classification performance in the context of imbalanced data (p < 0.05 by the DeLong test). On ALNM, AUC and accuracy were 0.780 (95% CI: 0.688-0.873) and 77.5%, which were comparable to other single models. 
The pretreatment US-based DE model could hold promise as clinical guidance for predicting pathological characteristics of patients with HER2-positive breast cancer, thereby facilitating timely adjustments in treatment strategies.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"850-857"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950582/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142074926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictive Study of Machine Learning-Based Multiparametric MRI Radiomics Nomogram for Perineural Invasion in Rectal Cancer: A Pilot Study.","authors":"Yueyan Wang, Aiqi Chen, Kai Wang, Yihui Zhao, Xiaomeng Du, Yan Chen, Lei Lv, Yimin Huang, Yichuan Ma","doi":"10.1007/s10278-024-01231-6","DOIUrl":"10.1007/s10278-024-01231-6","url":null,"abstract":"<p><p>This study aimed to establish and validate the efficacy of a nomogram model, synthesized through the integration of multi-parametric magnetic resonance radiomics and clinical risk factors, for forecasting perineural invasion in rectal cancer. We retrospectively collected data from 108 patients with pathologically confirmed rectal adenocarcinoma who underwent preoperative multiparametric MRI at the First Affiliated Hospital of Bengbu Medical College between April 2019 and August 2023. This dataset was subsequently divided into training and validation sets following a ratio of 7:3. Both univariate and multivariate logistic regression analyses were implemented to identify independent clinical risk factors associated with perineural invasion (PNI) in rectal cancer. We manually delineated the region of interest (ROI) layer-by-layer on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) sequences and extracted the image features. Five machine learning algorithms were used to construct radiomics model with the features selected by least absolute shrinkage and selection operator (LASSO) method. The optimal radiomics model was then selected and combined with clinical features to formulate a nomogram model. The model performance was evaluated using receiver operating characteristic (ROC) curve analysis, and its clinical value was assessed via decision curve analysis (DCA). Our final selection comprised 10 optimal radiological features and the SVM model showcased superior predictive efficiency and robustness among the five classifiers. 
The area under the curve (AUC) values of the nomogram model were 0.945 (0.899, 0.991) and 0.846 (0.703, 0.99) for the training and validation sets, respectively. The nomogram model developed in this study exhibited excellent performance in predicting PNI of rectal cancer, thereby offering valuable guidance for clinical decision-making. The nomogram could predict the perineural invasion status of rectal cancer at an early stage.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1224-1235"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950464/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised and Self-supervised Learning in Low-Dose Computed Tomography Denoising: Insights from Training Strategies.","authors":"Feixiang Zhao, Mingzhe Liu, Mingrong Xiang, Dongfen Li, Xin Jiang, Xiance Jin, Cai Lin, Ruili Wang","doi":"10.1007/s10278-024-01213-8","DOIUrl":"10.1007/s10278-024-01213-8","url":null,"abstract":"<p><p>In recent years, X-ray low-dose computed tomography (LDCT) has garnered widespread attention due to its significant reduction in the risk of patient radiation exposure. However, LDCT images often contain a substantial amount of noises, adversely affecting diagnostic quality. To mitigate this, a plethora of LDCT denoising methods have been proposed. Among them, deep learning (DL) approaches have emerged as the most effective, due to their robust feature extraction capabilities. Yet, the prevalent use of supervised training paradigms is often impractical due to the challenges in acquiring low-dose and normal-dose CT pairs in clinical settings. Consequently, unsupervised and self-supervised deep learning methods have been introduced for LDCT denoising, showing considerable potential for clinical applications. These methods' efficacy hinges on training strategies. Notably, there appears to be no comprehensive reviews of these strategies. Our review aims to address this gap, offering insights and guidance for researchers and practitioners. Based on training strategies, we categorize the LDCT methods into six groups: (i) cycle consistency-based, (ii) score matching-based, (iii) statistical characteristics of noise-based, (iv) similarity-based, (v) LDCT synthesis model-based, and (vi) hybrid methods. For each category, we delve into the theoretical underpinnings, training strategies, strengths, and limitations. In addition, we also summarize the open source codes of the reviewed methods. 
Finally, the review concludes with a discussion on open issues and future research directions.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"902-930"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950483/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142135012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing Deep Learning for Accurate Pathological Assessment of Brain Tumor Cell Types.","authors":"Chongxuan Tian, Yue Xi, Yuting Ma, Cai Chen, Cong Wu, Kun Ru, Wei Li, Miaoqing Zhao","doi":"10.1007/s10278-024-01107-9","DOIUrl":"10.1007/s10278-024-01107-9","url":null,"abstract":"<p><p>Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing between these conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common issue of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network for extracting deep features, which are then used by a support vector machine (SVM) for classification. Our evaluation shows that the Resnet50 (TL + SVM) model achieves a 97.4% accuracy, based on tenfold cross-validation on the test set. 
These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1098-1111"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950525/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 3D Convolutional Neural Network Based on Non-enhanced Brain CT to Identify Patients with Brain Metastases.","authors":"Tony Felefly, Ziad Francis, Camille Roukoz, Georges Fares, Samir Achkar, Sandrine Yazbeck, Antoine Nasr, Manal Kordahi, Fares Azoury, Dolly Nehme Nasr, Elie Nasr, Georges Noël","doi":"10.1007/s10278-024-01240-5","DOIUrl":"10.1007/s10278-024-01240-5","url":null,"abstract":"<p><p>Dedicated brain imaging for cancer patients is seldom recommended in the absence of symptoms. There is increasing availability of non-enhanced CT (NE-CT) of the brain, mainly owing to a wider utilization of Positron Emission Tomography-CT (PET-CT) in cancer staging. Brain metastases (BM) are often hard to diagnose on NE-CT. This work aims to develop a 3D Convolutional Neural Network (3D-CNN) based on brain NE-CT to distinguish patients with and without BM. We retrospectively included NE-CT scans for 100 patients with single or multiple BM and 100 patients without brain imaging abnormalities. Patients whose largest lesion was < 5 mm were excluded. The largest tumor was manually segmented on a matched contrast-enhanced T1 weighted Magnetic Resonance Imaging (MRI), and shape radiomics were extracted to determine the size and volume of the lesion. The brain was automatically segmented, and masked images were normalized and resampled. The dataset was split into training (70%) and validation (30%) sets. Multiple versions of a 3D-CNN were developed, and the best model was selected based on accuracy (ACC) on the validation set. The median largest tumor Maximum-3D-Diameter was 2.29 cm, and its median volume was 2.81 cc. Solitary BM were found in 27% of the patients, while 49% had > 5 BMs. The best model consisted of 4 convolutional layers with 3D average pooling layers, dropout layers of 50%, and a sigmoid activation function. Mean validation ACC was 0.983 (SD: 0.020) and mean area under receiver-operating characteristic curve was 0.983 (SD: 0.023). 
Sensitivity was 0.983 (SD: 0.020). We developed an accurate 3D-CNN based on brain NE-CT to differentiate between patients with and without BM. The model merits further external validation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"858-864"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142074924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DECNet: Left Atrial Pulmonary Vein Class Imbalance Classification Network.","authors":"GuoDong Zhang, WenWen Gu, TingYu Liang, YanLin Li, Wei Guo, ZhaoXuan Gong, RongHui Ju","doi":"10.1007/s10278-024-01221-8","DOIUrl":"10.1007/s10278-024-01221-8","url":null,"abstract":"<p><p>In clinical practice, the anatomical classification of pulmonary veins plays a crucial role in the preoperative assessment of atrial fibrillation radiofrequency ablation surgery. Accurate classification of pulmonary vein anatomy assists physicians in selecting appropriate mapping electrodes and avoids causing pulmonary arterial hypertension. Due to the diverse and subtly different anatomical classifications of pulmonary veins, as well as the imbalance in data distribution, deep learning models often exhibit poor expression capability in extracting deep features, leading to misjudgments and affecting classification accuracy. Therefore, in order to solve the problem of unbalanced classification of left atrial pulmonary veins, this paper proposes a network integrating multi-scale feature-enhanced attention and dual-feature extraction classifiers, called DECNet. The multi-scale feature-enhanced attention utilizes multi-scale information to guide the reinforcement of deep features, generating channel weights and spatial weights to enhance the expression capability of deep features. The dual-feature extraction classifier assigns a fixed number of channels to each category, equally evaluating all categories, thus alleviating the learning bias and overfitting caused by data imbalance. By combining the two, the expression capability of deep features is strengthened, achieving accurate classification of left atrial pulmonary vein morphology and providing support for subsequent clinical treatment. 
The proposed method is evaluated on datasets provided by the People's Hospital of Liaoning Province and the publicly available DermaMNIST dataset, achieving average accuracies of 78.81% and 83.44%, respectively, demonstrating the effectiveness of the proposed approach.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"819-837"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}