{"title":"CardSegNet: An adaptive hybrid CNN-vision transformer model for heart region segmentation in cardiac MRI","authors":"Hamed Aghapanah , Reza Rasti , Saeed Kermani , Faezeh Tabesh , Hossein Yousefi Banaem , Hamidreza Pour Aliakbar , Hamid Sanei , William Paul Segars","doi":"10.1016/j.compmedimag.2024.102382","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102382","url":null,"abstract":"<div><p>Cardiovascular MRI (CMRI) is a non-invasive imaging technique used to assess the structure and function of the blood circulatory system. Precise image segmentation is required to measure cardiac parameters and diagnose abnormalities through CMRI data. Because of anatomical heterogeneity and image variations, cardiac image segmentation is a challenging task. Quantification of cardiac parameters requires high-performance segmentation of the left ventricle (LV), right ventricle (RV), and left ventricle myocardium from the background. One solution is to segment the regions manually, which is a time-consuming and error-prone procedure. In this context, many semi- or fully automatic solutions have been proposed recently, among which deep learning-based methods have shown high performance in segmenting regions in CMRI data. In this study, a self-adaptive multi attention (SMA) module is introduced to adaptively leverage multiple attention mechanisms for better segmentation. The SMA integrates convolution-based position and channel attention mechanisms with a patch tokenization-based vision transformer (ViT) attention mechanism in a hybrid, end-to-end manner. The CNN- and ViT-based attentions capture short- and long-range dependencies, respectively, for more precise segmentation. The SMA module is applied in an encoder-decoder structure with a ResNet50 backbone, named CardSegNet. 
Furthermore, a deep supervision method with multiple loss functions is introduced into the CardSegNet optimizer to reduce overfitting and enhance the model’s performance. The proposed model is validated on ACDC2017 (n=100), M&Ms (n=321), and a local dataset (n=22) using 10-fold cross-validation, yielding promising segmentation results and demonstrating that it outperforms its counterparts.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102382"},"PeriodicalIF":5.7,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140618093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
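The adaptive fusion of several attention outputs that an SMA-style module performs can be pictured as a softmax-weighted sum over the individual attention maps. The function names and the scalar-weight formulation below are illustrative assumptions, not the authors' implementation, which learns the weighting end-to-end inside the network.

```python
import math

def softmax(ws):
    """Normalize raw scalar weights into a probability distribution."""
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fuse(attention_maps, weights):
    """Fuse equally sized 2D attention maps with softmax-normalized weights.

    attention_maps: list of maps (lists of lists of floats).
    weights: one raw scalar per map (hypothetical learnable parameters).
    Returns the fused map and the normalized weights.
    """
    alphas = softmax(weights)
    h, w = len(attention_maps[0]), len(attention_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for a, amap in zip(alphas, attention_maps):
        for i in range(h):
            for j in range(w):
                fused[i][j] += a * amap[i][j]
    return fused, alphas
```

With equal raw weights the fusion reduces to a plain average of the maps.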
{"title":"Global contextual representation via graph-transformer fusion for hepatocellular carcinoma prognosis in whole-slide images","authors":"Luyu Tang , Songhui Diao , Chao Li , Miaoxia He , Kun Ru , Wenjian Qin","doi":"10.1016/j.compmedimag.2024.102378","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102378","url":null,"abstract":"<div><p>Current methods for digital pathology images typically employ small image patches to learn local representative features, thereby mitigating heavy computational and memory demands. However, the global contextual features of whole-slide images (WSIs) are not fully considered. Here, we designed a hybrid model, called TransGNN, that utilizes a Graph Neural Network (GNN) module and a Transformer module to represent global contextual features. The GNN module builds a WSI-graph over the foreground area of a WSI to explicitly capture structural features, while the Transformer module implicitly learns global context information through its self-attention mechanism. Hepatocellular carcinoma (HCC) prognostic biomarkers were used to illustrate the importance of global contextual information in cancer histopathological analysis. Our model was validated using 362 WSIs from 355 HCC patients in The Cancer Genome Atlas (TCGA). It showed impressive performance with a Concordance Index (C-Index) of 0.7308 (95% Confidence Interval (CI): (0.6283–0.8333)) for overall survival prediction and achieved the best performance among all models. Additionally, our model achieved an area under the curve of 0.7904, 0.8087, and 0.8004 for 1-year, 3-year, and 5-year survival predictions, respectively. We further verified the superior performance of our model in HCC risk stratification and its clinical value through Kaplan–Meier curves and univariate and multivariate Cox regression analyses. 
Our research demonstrated that TransGNN effectively utilized the context information of WSIs and contributed to the clinical prognostic evaluation of HCC.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102378"},"PeriodicalIF":5.7,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140604545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
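The Concordance Index reported above can be computed from survival times, event indicators, and model risk scores. The sketch below is the standard Harrell formulation in plain Python, not the paper's evaluation code: a pair is comparable when the subject with the observed event fails earlier, and concordant when the model assigns that subject the higher risk.

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.

    times: observed follow-up times.
    events: 1 if the event (e.g., death) was observed, 0 if censored.
    risks: model risk scores (higher = worse predicted prognosis).
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Comparable: subject i had the event strictly before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable
```

A perfectly ordered risk ranking yields 1.0, a reversed one 0.0, and random scores about 0.5.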
{"title":"Distraction-aware hierarchical learning for vascular structure segmentation in intravascular ultrasound images","authors":"Wenhao Zhong , Heye Zhang , Zhifan Gao , William Kongto Hau , Guang Yang , Xiujian Liu , Lin Xu","doi":"10.1016/j.compmedimag.2024.102381","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102381","url":null,"abstract":"<div><p>Vascular structure segmentation in intravascular ultrasound (IVUS) images plays an important role in the pre-procedural evaluation of percutaneous coronary intervention (PCI). However, vascular structure segmentation in IVUS images is challenged by structure-dependent distractions, which fall into two cases: structural intrinsic distractions and inter-structural distractions. Traditional machine learning methods often rely solely on low-level features, overlooking high-level features; this limits their generalization. Existing semantic segmentation methods integrate low-level and high-level features to enhance generalization performance, but they also introduce additional interference, which hinders the resolution of structural intrinsic distractions. Distraction cue methods attempt to address structural intrinsic distractions by removing interference from the features through a unique decoder. However, they tend to overlook the problem of inter-structural distractions. In this paper, we propose distraction-aware hierarchical learning (DHL) for vascular structure segmentation in IVUS images. Inspired by distraction cue methods for removing interference in a decoder, the DHL is designed as a hierarchical decoder that gradually removes structure-dependent distractions. The DHL includes a global perception process, a distraction perception process, and a structural perception process. 
The global perception process and the distraction perception process remove structural intrinsic distractions, and then the structural perception process removes inter-structural distractions. In the global perception process, the DHL searches for the coarse structural region of the vascular structures on each slice of the IVUS sequence. In the distraction perception process, the DHL progressively refines the coarse structural region of the vascular structures to remove structural intrinsic distractions. In the structural perception process, the DHL detects regions of inter-structural distractions in the fused structure features and then separates them. Extensive experiments on 361 subjects show that the DHL is effective (e.g., the average Dice is greater than 0.95) and superior to ten state-of-the-art IVUS vascular structure segmentation methods.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102381"},"PeriodicalIF":5.7,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140618092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
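The Dice score used to evaluate the DHL is twice the overlap between predicted and reference masks divided by their total size. A minimal sketch over flat binary masks:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks.

    pred, target: flat lists of 0/1 values of equal length.
    Returns a value in [0, 1]; 1.0 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are in perfect agreement.
    return 2.0 * intersection / total if total else 1.0
```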
{"title":"Sub-features orthogonal decoupling: Detecting bone wall absence via a small number of abnormal examples for temporal CT images","authors":"Xiaoguang Li , Yichao Zhou , Hongxia Yin , Pengfei Zhao , Ruowei Tang , Han Lv , Yating Qin , Li Zhuo , Zhenchang Wang","doi":"10.1016/j.compmedimag.2024.102380","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102380","url":null,"abstract":"<div><p>The absence of bone wall located in the jugular bulb and sigmoid sinus of the temporal bone is one of the important reasons for pulsatile tinnitus. Automatic and accurate detection of these abnormal signs in CT slices has important theoretical significance and clinical value. Due to the shortage of abnormal samples, imbalanced samples, small inter-class differences, and low interpretability, existing deep-learning methods are greatly challenged. In this paper, we proposed a sub-features orthogonal decoupling model, which can effectively disentangle the representation features into class-specific sub-features and class-independent sub-features in a latent space. The former contains the discriminative information, while the latter preserves information for image reconstruction. In addition, the proposed method can generate image samples using category conversion by combining the different class-specific sub-features and the class-independent sub-features, achieving a corresponding mapping between deep features and images of specific classes. The proposed model improves the interpretability of the deep model and provides image synthesis methods for downstream tasks. 
The effectiveness of the method was verified in the detection of bone wall absence in the temporal bone jugular bulb and sigmoid sinus.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102380"},"PeriodicalIF":5.7,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140552526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
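Disentangling a representation into orthogonal sub-features is typically encouraged by penalizing the similarity between the two parts. The squared-cosine penalty below is a hypothetical stand-in for the paper's decoupling objective: it is zero exactly when the class-specific and class-independent sub-features are orthogonal.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def orthogonality_penalty(class_specific, class_independent):
    """Squared cosine between the two sub-feature vectors.

    Zero when they are orthogonal (fully decoupled); 1 when they are
    parallel (no decoupling at all).
    """
    return cosine(class_specific, class_independent) ** 2
```

Added to a reconstruction and classification loss, such a term pushes the encoder to route discriminative and reconstructive information into separate subspaces.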
{"title":"Semantically redundant training data removal and deep model classification performance: A study with chest X-rays","authors":"Sivaramakrishnan Rajaraman, Ghada Zamzmi , Feng Yang , Zhaohui Liang, Zhiyun Xue, Sameer Antani","doi":"10.1016/j.compmedimag.2024.102379","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102379","url":null,"abstract":"<div><p>Deep learning (DL) has demonstrated its innate capacity to independently learn hierarchical features from complex and multi-dimensional data. A common understanding is that its performance scales up with the amount of training data. However, the data must also exhibit variety to enable improved learning. In medical imaging data, semantic redundancy, which is the presence of similar or repetitive information, can occur due to the presence of multiple images that have highly similar presentations for the disease of interest. Also, the common use of augmentation methods to generate variety in DL training could limit performance when indiscriminately applied to such data. We therefore hypothesize that semantic redundancy tends to lower performance and limit generalizability to unseen data, and we question its impact on classifier performance even with large data. We propose an entropy-based sample scoring approach to identify and remove semantically redundant training data and demonstrate using the publicly available NIH chest X-ray dataset that the model trained on the resulting informative subset of training data significantly outperforms the model trained on the full training set, during both internal (recall: 0.7164 vs 0.6597, p<0.05) and external testing (recall: 0.3185 vs 0.2589, p<0.05). 
Our findings emphasize the importance of information-oriented training sample selection as opposed to the conventional practice of using all available training data.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102379"},"PeriodicalIF":5.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000569/pdfft?md5=6892a4c80999a323e6edf07480aef597&pid=1-s2.0-S0895611124000569-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140545845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
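An entropy-based sample score of the kind described can be illustrated with histogram entropy over pixel intensities. The binning scheme and the keep-fraction rule below are assumptions for illustration, not the authors' exact procedure.

```python
import math

def shannon_entropy(pixels, bins=8, max_val=255):
    """Shannon entropy (bits) of the intensity histogram of a flat image."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // (max_val + 1), bins - 1)] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def prune_redundant(images, keep_fraction=0.5):
    """Keep the highest-entropy (most informative) fraction of samples.

    A simplified stand-in for the paper's scoring: low-entropy images
    (flat, repetitive content) are dropped first.
    """
    scored = sorted(images, key=shannon_entropy, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]
```

A constant image scores zero entropy and is pruned before any image with varied intensities.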
{"title":"Deep learning-based glomerulus detection and classification with generative morphology augmentation in renal pathology images","authors":"Chia-Feng Juang , Ya-Wen Chuang , Guan-Wen Lin , I-Fang Chung , Ying-Chih Lo","doi":"10.1016/j.compmedimag.2024.102375","DOIUrl":"10.1016/j.compmedimag.2024.102375","url":null,"abstract":"<div><p>Glomerulus morphology on renal pathology images provides valuable diagnosis and outcome prediction information. To provide better care, an efficient, standardized, and scalable method is urgently needed to optimize the time-consuming and labor-intensive interpretation process by renal pathologists. This paper proposes a deep convolutional neural network (CNN)-based approach to automatically detect and classify glomeruli with different stains in renal pathology images. In the glomerulus detection stage, this paper proposes a flattened Xception with a feature pyramid network (FX-FPN). The FX-FPN is employed as a backbone in the framework of faster region-based CNN to improve glomerulus detection performance. In the classification stage, this paper considers the classification of five glomerulus morphologies using a flattened Xception classifier. To endow the classifier with higher discriminability, this paper proposes a generative data augmentation approach for patch-based glomerulus morphology augmentation. New glomerulus patches of different morphologies are generated for data augmentation through the cycle-consistent generative adversarial network (CycleGAN). The single detection model achieves an <span><math><msub><mrow><mi>F</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> score of up to 0.9524 on H&E and PAS stains. The classification result shows that the average sensitivity and specificity are 0.7077 and 0.9316, respectively, by using the flattened Xception with the original training data. The sensitivity and specificity increase to 0.7623 and 0.9443, respectively, with the generative data augmentation. 
Comparisons with different deep CNN models show the effectiveness and superiority of the proposed approach.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102375"},"PeriodicalIF":5.7,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140404349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
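The average sensitivity and specificity reported above are per-class measures computable from a confusion matrix. A minimal sketch for one class treated as positive:

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """Sensitivity and specificity for one class of a multi-class problem.

    y_true, y_pred: equal-length label sequences.
    positive: the label treated as the positive class.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)
```

Averaging these pairs over the five morphology classes gives figures comparable to the ones quoted.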
{"title":"Deep local-to-global feature learning for medical image super-resolution","authors":"Wenfeng Huang , Xiangyun Liao , Hao Chen , Ying Hu , Wenjing Jia , Qiong Wang","doi":"10.1016/j.compmedimag.2024.102374","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102374","url":null,"abstract":"<div><p>Medical images play a vital role in medical analysis by providing crucial information about patients’ pathological conditions. However, the quality of these images can be compromised by many factors, such as limited resolution of the instruments, artifacts caused by movements, and the complexity of the scanned areas. As a result, low-resolution (LR) images cannot provide sufficient information for diagnosis. To address this issue, researchers have attempted to apply image super-resolution (SR) techniques to restore the high-resolution (HR) images from their LR counterparts. However, these techniques are designed for generic images, and thus suffer from many challenges unique to medical images. An obvious one is the diversity of the scanned objects; for example, the organs, tissues, and vessels typically appear in different sizes and shapes, and are thus hard to restore with standard convolutional neural networks (CNNs). In this paper, we develop a dynamic-local learning framework to capture the details of these diverse areas, consisting of deformable convolutions with adjustable kernel shapes. Moreover, the global information between the tissues and organs is vital for medical diagnosis. To preserve global information, we propose pixel–pixel and patch–patch global learning using a non-local mechanism and a vision transformer (ViT), respectively. The result is a novel CNN-ViT neural network with Local-to-Global feature learning for medical image SR, referred to as LGSR, which can accurately restore both local details and global information. 
We evaluate our method on six public datasets and one large-scale private dataset, which include five different types of medical images (<em>i.e.</em>, Ultrasound, OCT, Endoscope, CT, and MRI images). Experiments show that the proposed method achieves superior PSNR/SSIM and visual performance compared with state-of-the-art methods at competitive computational cost, measured in network parameters, runtime, and FLOPs. Moreover, an experiment on the downstream task of OCT image segmentation demonstrates a significant positive effect of LGSR on performance.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102374"},"PeriodicalIF":5.7,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140332733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
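PSNR, one of the metrics reported for the super-resolution comparison, follows directly from the mean squared error between the reference and restored images:

```python
import math

def psnr(ref, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images.

    ref, restored: flat lists of pixel intensities.
    max_val: maximum possible pixel value (255 for 8-bit images).
    """
    mse = sum((r - t) ** 2 for r, t in zip(ref, restored)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and each halving of the MSE adds about 3 dB.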
{"title":"Unsupervised classification of multi-contrast magnetic resonance histology of peripheral arterial disease lesions using a convolutional variational autoencoder with a Gaussian mixture model in latent space: A technical feasibility study","authors":"Judit Csore , Trisha L. Roy , Graham Wright , Christof Karmonik","doi":"10.1016/j.compmedimag.2024.102372","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102372","url":null,"abstract":"<div><h3>Purpose</h3><p>To investigate the feasibility of a deep learning algorithm combining a variational autoencoder (VAE) with two-dimensional (2D) convolutional neural networks (CNNs) for automatically quantifying hard tissue presence and morphology in multi-contrast magnetic resonance (MR) images of peripheral arterial disease (PAD) occlusive lesions.</p></div><div><h3>Methods</h3><p>Multi-contrast MR images (T2-weighted and ultrashort echo time) were acquired from lesions harvested from six amputated legs with high isotropic spatial resolution (0.078 mm and 0.156 mm, respectively) at 9.4 T. A total of 4014 pseudo-color combined images were generated, with 75% used to train a VAE employing custom 2D CNN layers. A Gaussian mixture model (GMM) was employed to classify the latent space data into four tissue classes: I) concentric calcified (c), II) eccentric calcified (e), III) occluded with hard tissue (h) and IV) occluded with soft tissue (s). Test image probabilities, encoded by the trained VAE, were used to evaluate model performance.</p></div><div><h3>Results</h3><p>GMM component classification probabilities ranged from 0.92 to 0.97 for class (c), 1.00 for class (e), 0.82–0.95 for class (h) and 0.56–0.93 for the remaining class (s). 
Due to the complexity of soft-tissue lesions reflected in the heterogeneity of the pseudo-color images, more GMM components (n=17) were attributed to class (s), compared to the other three (c, e and h) (n=6).</p></div><div><h3>Conclusion</h3><p>Combination of 2D CNN VAE and GMM achieves high classification probabilities for hard tissue-containing lesions. Automatic recognition of these classes may aid therapeutic decision-making and identifying uncrossable lesions prior to endovascular intervention.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"115 ","pages":"Article 102372"},"PeriodicalIF":5.7,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140347178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
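The GMM classification probabilities quoted in the Results are component posteriors (responsibilities). The sketch below is a toy one-dimensional version of that step; the paper works in the VAE's multi-dimensional latent space with many more components.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_responsibilities(x, weights, mus, sigmas):
    """Posterior probability of each mixture component for a latent value x.

    weights: mixing proportions (sum to 1); mus, sigmas: component params.
    Returns a list of responsibilities that sums to 1.
    """
    likelihoods = [w * gaussian_pdf(x, m, s)
                   for w, m, s in zip(weights, mus, sigmas)]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]
```

A latent value near one component's mean receives a responsibility close to 1 for that component, which is how a test image gets its class probability.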
{"title":"Annotation-free prediction of treatment-specific tissue outcome from 4D CT perfusion imaging in acute ischemic stroke","authors":"Alejandro Gutierrez , Kimberly Amador , Anthony Winder , Matthias Wilms , Jens Fiehler , Nils D. Forkert","doi":"10.1016/j.compmedimag.2024.102376","DOIUrl":"10.1016/j.compmedimag.2024.102376","url":null,"abstract":"<div><p>Acute ischemic stroke is a critical health condition that requires timely intervention. Following admission, clinicians typically use perfusion imaging to facilitate treatment decision-making. While deep learning models leveraging perfusion data have demonstrated the ability to predict post-treatment tissue infarction for individual patients, predictions are often represented as binary or probabilistic masks that are not straightforward to interpret or easy to obtain. Moreover, these models typically rely on large amounts of subjectively segmented data and non-standard perfusion analysis techniques. To address these challenges, we propose a novel deep learning approach that directly predicts follow-up computed tomography images from full spatio-temporal 4D perfusion scans through a temporal compression. The results show that this method leads to realistic follow-up image predictions containing the infarcted tissue outcomes. The proposed compression method achieves comparable prediction results to using perfusion maps as inputs but without the need for perfusion analysis or arterial input function selection. Additionally, separate models trained on 45 patients treated with thrombolysis and 102 treated with thrombectomy showed that each model correctly captured the different patient-specific treatment effects as shown by image difference maps. 
The findings of this work clearly highlight the potential of our method to provide interpretable stroke treatment decision support without requiring manual annotations.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"114 ","pages":"Article 102376"},"PeriodicalIF":5.7,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140273444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
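The temporal compression step can be pictured as collapsing the time axis of the 4D perfusion series into a single image by a normalized weighted sum. The fixed weights below stand in for the compression the authors learn end-to-end; the function is an illustrative assumption, not their architecture.

```python
def temporal_compress(frames, weights):
    """Collapse the time axis of a perfusion series with a weighted sum.

    frames: list of T images, each a flat list of voxel values.
    weights: one weight per time point (hypothetical; normalized here so
    the output stays on the same intensity scale as the inputs).
    """
    total = sum(weights)
    alphas = [w / total for w in weights]
    n = len(frames[0])
    out = [0.0] * n
    for a, frame in zip(alphas, frames):
        for i in range(n):
            out[i] += a * frame[i]
    return out
```

With equal weights this is a plain temporal average; learned weights can instead emphasize the time points most predictive of the follow-up image.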
{"title":"A novel center-based deep contrastive metric learning method for the detection of polymicrogyria in pediatric brain MRI","authors":"Lingfeng Zhang , Nishard Abdeen , Jochen Lang","doi":"10.1016/j.compmedimag.2024.102373","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102373","url":null,"abstract":"<div><p>Polymicrogyria (PMG) is a disorder of cortical organization mainly seen in children, which can be associated with seizures, developmental delay and motor weakness. PMG is typically diagnosed on magnetic resonance imaging (MRI), but some cases can be challenging to detect even for experienced radiologists. In this study, we create an open pediatric MRI dataset (PPMR) containing both PMG and control cases from the Children’s Hospital of Eastern Ontario (CHEO), Ottawa, Canada. The differences between PMG and control MRIs are subtle and the true distribution of the features of the disease is unknown. This makes automatic detection of potential PMG cases in MRI difficult. To enable the automatic detection of potential PMG cases, we propose an anomaly detection method based on a novel center-based deep contrastive metric learning loss function (cDCM). Despite working with a small and imbalanced dataset, our method achieves 88.07% recall at 71.86% precision. This will facilitate a computer-aided tool for radiologists to select potential PMG MRIs. 
To the best of our knowledge, our research is the first to apply machine learning techniques to identify PMG solely from MRI.</p><p>Our code is available at: <span>https://github.com/RichardChangCA/Deep-Contrastive-Metric-Learning-Method-to-Detect-Polymicrogyria-in-Pediatric-Brain-MRI</span><svg><path></path></svg>.</p><p>Our pediatric MRI dataset is available at: <span>https://www.kaggle.com/datasets/lingfengzhang/pediatric-polymicrogyria-mri-dataset</span><svg><path></path></svg>.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"114 ","pages":"Article 102373"},"PeriodicalIF":5.7,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140190733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
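A center-based contrastive loss of the general family the cDCM belongs to pulls normal samples toward a learned center in embedding space and pushes anomalous samples beyond a margin. This generic formulation is a sketch, not the paper's exact cDCM loss.

```python
def center_contrastive_loss(dists, labels, margin=1.0):
    """Center-based contrastive loss over distances to a class center.

    dists: each sample's distance to the (hypothetical) learned center.
    labels: 1 for normal samples (pulled toward the center), 0 for
    abnormal samples (pushed beyond the margin).
    """
    loss = 0.0
    for d, y in zip(dists, labels):
        if y == 1:
            loss += d ** 2                    # attract normals to the center
        else:
            loss += max(0.0, margin - d) ** 2  # repel anomalies past the margin
    return loss / len(dists)
```

At test time, distance to the center then serves directly as an anomaly score for flagging potential PMG scans.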