{"title":"PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation","authors":"Vasileios Magoulianitis , Jiaxin Yang , Yijing Yang , Jintang Xue , Masatomo Kaneko , Giovanni Cacciamani , Andre Abreu , Vinay Duddalwar , C.-C. Jay Kuo , Inderbir S. Gill , Chrysostomos Nikias","doi":"10.1016/j.compmedimag.2024.102408","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102408","url":null,"abstract":"<div><p>Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false positive rate, increasing diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance, although they require a large model size and high complexity. DL models also lack feature interpretability and are perceived as “black boxes” in the medical field. The PCa-RadHop pipeline is proposed in this work, aiming to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: Stage-1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent Stage-2 is introduced to refine the predictions by including more contextual information and radiomics features from each already detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show a competitive performance of the proposed method among other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients.
Moreover, PCa-RadHop maintains an orders-of-magnitude smaller model size and complexity.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102408"},"PeriodicalIF":5.4,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141438459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
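The AUC reported in the abstract above can be illustrated with a tiny, framework-free sketch using the rank-based Mann-Whitney formulation; the labels and scores below are toy values, not the PI-CAI data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: iterable of 0/1 ground truth (1 = clinically significant lesion)
    scores: iterable of model scores (e.g. per-patient heatmap maxima)
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive/negative pairs where the positive case outranks
    # the negative one; ties contribute half a pair.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 positives, 2 negatives
print(auc([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.2]))  # 0.75
```

A perfect ranker scores 1.0, a random one about 0.5, which is why the 0.807 above is read against those anchors.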
{"title":"Unsupervised domain adaptation based on feature and edge alignment for femur X-ray image segmentation","authors":"Xiaoming Jiang , Yongxin Yang , Tong Su , Kai Xiao , LiDan Lu , Wei Wang , Changsong Guo , Lizhi Shao , Mingjing Wang , Dong Jiang","doi":"10.1016/j.compmedimag.2024.102407","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102407","url":null,"abstract":"<div><p>The gold standard for diagnosing osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). However, various factors during the imaging process cause domain shifts in DXA images, which lead to incorrect bone segmentation. Research shows that poor bone segmentation is one of the prime reasons for inaccurate BMD measurement, severely affecting the diagnosis and treatment plans for osteoporosis. In this paper, we propose a Multi-feature Joint Discriminative Domain Adaptation (MDDA) framework to improve segmentation performance and the generalization of the network on domain-shifted images. The proposed method learns domain-invariant features between the source and target domains from the perspectives of multi-scale features and edges, and is evaluated on real data from multi-center datasets. Compared to other state-of-the-art methods, the feature prior from the source domain and the edge prior enable the proposed MDDA to achieve optimal domain adaptation performance and generalization. It also demonstrates superior performance in domain adaptation tasks on small datasets, even using only 5 or 10 images.
In this study, MDDA provides an accurate bone segmentation tool for BMD measurement based on DXA imaging.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102407"},"PeriodicalIF":5.7,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141328854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
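The edge alignment in the MDDA abstract above presupposes extracting an edge map from each image. A minimal sketch of such an edge prior, assuming a plain nested-list grayscale image and a hypothetical gradient threshold (not the paper's implementation):

```python
def edge_map(img, thresh=0.5):
    """Binary edge map from forward finite differences; a crude stand-in
    for the edge prior used to align source and target domains.
    img: 2D nested list of intensities; thresh: gradient-magnitude cutoff."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w - 1):
            gx = img[i][j + 1] - img[i][j]  # horizontal gradient
            gy = img[i + 1][j] - img[i][j]  # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges[i][j] = 1
    return edges
```

Aligning such maps across domains is attractive because bone contours survive the intensity shifts that DXA domain changes introduce.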
{"title":"Uncertainty estimation using a 3D probabilistic U-Net for segmentation with small radiotherapy clinical trial datasets","authors":"Phillip Chlap , Hang Min , Jason Dowling , Matthew Field , Kirrily Cloak , Trevor Leong , Mark Lee , Julie Chu , Jennifer Tan , Phillip Tran , Tomas Kron , Mark Sidhom , Kirsty Wiltshire , Sarah Keats , Andrew Kneebone , Annette Haworth , Martin A. Ebert , Shalini K. Vinod , Lois Holloway","doi":"10.1016/j.compmedimag.2024.102403","DOIUrl":"10.1016/j.compmedimag.2024.102403","url":null,"abstract":"<div><h3>Background and objectives</h3><p>Bio-medical image segmentation models typically attempt to predict one segmentation that resembles a ground-truth structure as closely as possible. However, as medical images are not perfect representations of anatomy, obtaining this ground truth is not possible. A common surrogate is to have multiple expert observers define the same structure for a dataset. When multiple observers define the same structure on the same image, there can be significant differences depending on the structure, image quality/modality and the region being defined. It is often desirable to estimate this type of aleatoric uncertainty in a segmentation model to help understand the region in which the true structure is likely to be positioned. Furthermore, obtaining these datasets is resource-intensive, so training such models using limited data may be required. With a small dataset size, differing patient anatomy is likely not well represented, causing epistemic uncertainty, which should also be estimated so that the cases for which the model is effective can be determined.</p></div><div><h3>Methods</h3><p>We use a 3D probabilistic U-Net to train a model from which several segmentations can be sampled to estimate the range of uncertainty seen between multiple observers.
To ensure that regions where observers disagree most are emphasised in model training, we expand the Generalised Evidence Lower Bound with Constrained Optimisation (GECO) loss function with an additional contour loss term to give attention to this region. Ensemble and Monte-Carlo dropout (MCDO) uncertainty quantification methods are used during inference to estimate model confidence on an unseen case. We apply our methodology to two radiotherapy clinical trial datasets, a gastric cancer trial (TOPGEAR, TROG 08.08) and a post-prostatectomy prostate cancer trial (RAVES, TROG 08.03). Each dataset contains only 10 cases for model development to segment the clinical target volume (CTV), which was defined by multiple observers on each case. An additional 50 cases are available as a hold-out dataset for each trial, for which only one observer defined the CTV structure on each case. Up to 50 samples were generated using the probabilistic model for each case in the hold-out dataset. To assess performance, each manually defined structure was matched to the closest matching sampled segmentation based on commonly used metrics.</p></div><div><h3>Results</h3><p>The TOPGEAR CTV model achieved a Dice Similarity Coefficient (DSC) and Surface DSC (sDSC) of 0.7 and 0.43 respectively, with the RAVES model achieving 0.75 and 0.71 respectively.
Segmentation quality across cases in the hold-out datasets was variable; however, both the ensemble and MCDO uncertainty estimation approaches were able to accurately estimate model confidence, with a p-value < 0.001 for both TOPGEAR and RAVES when comparing the DSC using the Pearson correlation coefficient.</p></div><div><h3>Conclu","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102403"},"PeriodicalIF":5.7,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000806/pdfft?md5=868a3bb84995d28d5305b07d9e1c8a21&pid=1-s2.0-S0895611124000806-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141277277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
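The Dice Similarity Coefficient and the closest-match evaluation described above can be sketched as follows; masks are represented as sets of voxel coordinates for simplicity (an illustration, not the trial's evaluation code):

```python
def dice(a, b):
    """Dice Similarity Coefficient between two binary masks, each given
    as a set of voxel coordinates."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

def best_sample_dice(manual, samples):
    """Match a manually defined structure to its closest sampled
    segmentation, as in the hold-out evaluation described above."""
    return max(dice(manual, s) for s in samples)
```

Taking the best of up to 50 samples asks whether the observer's contour lies within the model's predicted uncertainty range, rather than whether a single prediction matches it.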
{"title":"PFMNet: Prototype-based feature mapping network for few-shot domain adaptation in medical image segmentation","authors":"Runze Wang, Guoyan Zheng","doi":"10.1016/j.compmedimag.2024.102406","DOIUrl":"10.1016/j.compmedimag.2024.102406","url":null,"abstract":"<div><p>Lack of data is one of the biggest hurdles for rare disease research using deep learning. Due to the lack of rare-disease images and annotations, training a robust network for automatic rare-disease image segmentation is very challenging. To address this challenge, few-shot domain adaptation (FSDA) has emerged as a practical research direction, aiming to leverage a limited number of annotated images from a target domain to facilitate adaptation of models trained on other large datasets in a source domain. In this paper, we present a novel prototype-based feature mapping network (PFMNet) designed for FSDA in medical image segmentation. PFMNet adopts an encoder–decoder structure for segmentation, with the prototype-based feature mapping (PFM) module positioned at the bottom of the encoder–decoder structure. The PFM module transforms high-level features from the target domain into the source domain-like features that are more easily comprehensible by the decoder. By leveraging these source domain-like features, the decoder can effectively perform few-shot segmentation in the target domain and generate accurate segmentation masks. We evaluate the performance of PFMNet through experiments on three typical yet challenging few-shot medical image segmentation tasks: cross-center optic disc/cup segmentation, cross-center polyp segmentation, and cross-modality cardiac structure segmentation. We consider four different settings: 5-shot, 10-shot, 15-shot, and 20-shot. 
The experimental results substantiate the efficacy of our proposed approach for few-shot domain adaptation in medical image segmentation.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102406"},"PeriodicalIF":5.7,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
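The prototype idea underlying PFMNet can be reduced, for intuition, to computing per-class mean feature vectors in the source domain and assigning target features to the nearest one. The actual PFM module is a learned network component, so this is only a conceptual sketch with toy feature vectors:

```python
def prototypes(features, labels):
    """Per-class prototype = mean feature vector over that class."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for k, v in enumerate(f):
            acc[k] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def nearest_prototype(f, protos):
    """Assign a target-domain feature to its closest source prototype
    by squared Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda y: d2(f, protos[y]))
```

Mapping target features toward such source anchors is what lets a decoder trained on the source domain remain usable with only a few annotated target images.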
{"title":"FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis","authors":"Angelo Lasala , Maria Chiara Fiorentino , Andrea Bandini , Sara Moccia","doi":"10.1016/j.compmedimag.2024.102405","DOIUrl":"10.1016/j.compmedimag.2024.102405","url":null,"abstract":"<div><p>Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions.
These achievements suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification, which could then be integrated into real clinical scenarios.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102405"},"PeriodicalIF":5.7,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
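The Fréchet inception distance quoted above compares Gaussian fits of real and synthetic feature distributions. In the univariate case the distance has a simple closed form, sketched here for intuition only; the reported FID is the multivariate analogue computed on Inception features:

```python
def frechet_1d(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two univariate Gaussians N(mu1, sigma1^2)
    and N(mu2, sigma2^2): (mu1 - mu2)^2 + (sigma1 - sigma2)^2, expanded.
    Lower means the two distributions are more alike."""
    return (mu1 - mu2) ** 2 + sigma1 ** 2 + sigma2 ** 2 - 2 * sigma1 * sigma2
```

The multivariate version replaces the last three terms with Tr(S1 + S2 - 2(S1 S2)^(1/2)) over the feature covariances, which is why identical distributions still score exactly zero.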
{"title":"A brain subcortical segmentation tool based on anatomy attentional fusion network for developing macaques","authors":"Tao Zhong , Ya Wang , Xiaotong Xu , Xueyang Wu , Shujun Liang , Zhenyuan Ning , Li Wang , Yuyu Niu , Gang Li , Yu Zhang","doi":"10.1016/j.compmedimag.2024.102404","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102404","url":null,"abstract":"<div><p>Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM) based on the initial rough segmentation of the subcortical region by a network as an anatomical constraint, providing comprehensive information on positions, structures, and morphology. Then we construct AAF-Net to fully fuse the SDM anatomical constraints and multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and complete four-fold cross-validations. 
Furthermore, we incorporated various external datasets to demonstrate the proposed tool’s generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at <span>https://github.com/TaoZhong11/Macaque_subcortical_segmentation</span><svg><path></path></svg> for direct application.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102404"},"PeriodicalIF":5.7,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
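The Signed Distance Map used above as an anatomical constraint can be illustrated with a brute-force 2D version. Sign conventions vary; this sketch negates distances inside the structure and assumes both classes are present in the mask (it is not the tool's implementation, which operates on 3D volumes with efficient distance transforms):

```python
def signed_distance_map(mask):
    """Brute-force SDM on a small 2D grid: for each cell, the Euclidean
    distance to the nearest cell of the opposite class, negated inside
    the structure."""
    h, w = len(mask), len(mask[0])
    cells = [(i, j) for i in range(h) for j in range(w)]
    sdm = [[0.0] * w for _ in range(h)]
    for i, j in cells:
        opposite = [(a, b) for a, b in cells if mask[a][b] != mask[i][j]]
        d = min(((i - a) ** 2 + (j - b) ** 2) ** 0.5 for a, b in opposite)
        sdm[i][j] = -d if mask[i][j] else d
    return sdm
```

Unlike a binary mask, the SDM carries position, structure, and morphology information at every voxel, which is what makes it useful as a constraint for the refinement network.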
{"title":"Progress and trends in neurological disorders research based on deep learning","authors":"Muhammad Shahid Iqbal , Md Belal Bin Heyat , Saba Parveen , Mohd Ammar Bin Hayat , Mohamad Roshanzamir , Roohallah Alizadehsani , Faijan Akhtar , Eram Sayeed , Sadiq Hussain , Hany S. Hussein , Mohamad Sawan","doi":"10.1016/j.compmedimag.2024.102400","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102400","url":null,"abstract":"<div><p>In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis—a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through detailed analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques.
Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review paper is beneficial for neuropathologists and new researchers in this field.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102400"},"PeriodicalIF":5.7,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141289967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography","authors":"Aurora Rofena , Valerio Guarrasi , Marina Sarli , Claudia Lucia Piccolo , Matteo Sammarra , Bruno Beomonte Zobel , Paolo Soda","doi":"10.1016/j.compmedimag.2024.102398","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102398","url":null,"abstract":"<div><p>Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. Then, it collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to get a recombined image showing contrast enhancement. Despite the diagnostic advantages of CESM for breast cancer diagnosis, the use of contrast medium can cause side effects, and CESM also exposes patients to a higher radiation dose than standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the models’ performance, also exploiting radiologists’ assessments, on a novel CESM dataset that includes 1138 images. As a further contribution of this work, we make the dataset publicly available.
The results show that CycleGAN is the most promising deep network to generate synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102398"},"PeriodicalIF":5.7,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000752/pdfft?md5=579b15387524c47940b3088af4489328&pid=1-s2.0-S0895611124000752-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141163469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
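The CycleGAN above is trained with, among other terms, a cycle-consistency loss: mapping a low-energy image to a recombined image and back should reproduce the input. A toy per-pixel L1 version of that term, shown only for intuition (real training applies it to batches of images produced by the two learned generators):

```python
def cycle_consistency_l1(x, reconstructed):
    """Mean absolute error between an input image and its reconstruction
    after a forward and backward generator pass, both flattened to lists
    of pixel intensities."""
    return sum(abs(a - b) for a, b in zip(x, reconstructed)) / len(x)
```

This term is what lets CycleGAN learn from unpaired data: it anchors the two generators to each other even when no pixel-aligned low-energy/recombined pairs are available.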
{"title":"Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies","authors":"Bartosz Machura , Damian Kucharski , Oskar Bozek , Bartosz Eksner , Bartosz Kokoszka , Tomasz Pekala , Mateusz Radom , Marek Strzelczak , Lukasz Zarudzki , Benjamín Gutiérrez-Becker , Agata Krason , Jean Tessier , Jakub Nalepa","doi":"10.1016/j.compmedimag.2024.102401","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102401","url":null,"abstract":"<div><p>Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses captured within MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation).
The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to construct training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of disease progression tracking and evaluation of treatment efficacy.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102401"},"PeriodicalIF":5.7,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000788/pdfft?md5=51dfd8fc9e95917b8971fa4297d3ea4e&pid=1-s2.0-S0895611124000788-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141095699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
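One simple way to combine an ensemble's outputs, shown purely for intuition (the pipeline above fuses dedicated detection and segmentation architectures, not a plain average), is voxelwise probability averaging followed by thresholding:

```python
def ensemble_detect(prob_maps, thresh=0.5):
    """Fuse per-voxel probabilities from several models by averaging,
    then threshold to a binary detection map.
    prob_maps: list of flattened probability maps, one per model."""
    n = len(prob_maps)
    fused = [sum(ps) / n for ps in zip(*prob_maps)]
    return [1 if p >= thresh else 0 for p in fused]
```

Averaging tempers the idiosyncratic false positives of any single model, which is one reason ensembles tend to detect small metastases more reliably than their individual members.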
{"title":"A 3D framework for segmentation of carotid artery vessel wall and identification of plaque compositions in multi-sequence MR images","authors":"Jian Wang , Fan Yu , Mengze Zhang , Jie Lu , Zhen Qian","doi":"10.1016/j.compmedimag.2024.102402","DOIUrl":"10.1016/j.compmedimag.2024.102402","url":null,"abstract":"<div><p>Accurately assessing carotid artery wall thickening and identifying risky plaque components are critical for early diagnosis and risk management of carotid atherosclerosis. In this paper, we present a 3D framework for automated segmentation of the carotid artery vessel wall and identification of the compositions of carotid plaque in multi-sequence magnetic resonance (MR) images under the challenge of imperfect manual labeling. Manual labeling is commonly done in 2D slices of these multi-sequence MR images and often lacks perfect alignment across 2D slices and the multiple MR sequences, leading to labeling inaccuracies. To address such challenges, our framework is split into two parts: a segmentation subnetwork and a plaque component identification subnetwork. Initially, a 2D localization network pinpoints the carotid artery’s position, extracting the region of interest (ROI) from the input images. Following that, a signed-distance-map-enabled 3D U-Net (Çiçek et al., 2016), an adaptation of the nnU-Net (Ronneberger and Fischer, 2015), segments the carotid artery vessel wall. This method allows for the concurrent segmentation of the vessel wall area using the signed distance map (SDM) loss (Xue et al., 2020), which regularizes the segmentation surfaces in 3D and reduces erroneous segmentation caused by imperfect manual labels. Subsequently, the ROI of the input images and the obtained vessel wall masks are extracted and combined to obtain the identification results of plaque components in the identification subnetwork.
Tailored data augmentation operations are introduced into the framework to reduce the false positive rate of calcification and hemorrhage identification. We trained and tested our proposed method on a dataset consisting of 115 patients, and it achieves an accurate segmentation result for the carotid artery wall (0.8459 Dice), which is superior to the best result in published studies (0.7885 Dice). Our approach yielded accuracies of 0.82, 0.73 and 0.88 for the identification of calcification, lipid-rich core and hemorrhage components, respectively. Our proposed framework can be potentially used in clinical and research settings to help radiologists perform cumbersome reading tasks and evaluate the risk of carotid plaques.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102402"},"PeriodicalIF":5.7,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141135573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}