A novel 3D segmentation approach for segmenting the prostate from dynamic contrast enhanced MRI using current appearance and learned shape prior

A. Firjani, A. Elnakib, F. Khalifa, A. El-Baz, G. Gimel'farb, M. El-Ghar, Adel Said Elmaghraby

The 10th IEEE International Symposium on Signal Processing and Information Technology, December 15, 2010
DOI: 10.1109/ISSPIT.2010.5711751
Citations: 15
Abstract
Prostate segmentation is an essential step in developing any non-invasive Computer-Assisted Diagnostic (CAD) system for the early diagnosis of prostate cancer using Magnetic Resonance Imaging (MRI). In this paper, we propose a novel framework for 3D segmentation of the prostate region from Dynamic Contrast-Enhanced Magnetic Resonance Images (DCE-MRI). The framework is based on a maximum a posteriori (MAP) estimate of a new log-likelihood function that consists of three descriptors: (i) 1st-order visual appearance descriptors of the DCE-MRI, (ii) a 3D spatially invariant 2nd-order homogeneity descriptor, and (iii) a 3D prostate shape descriptor. The shape prior is learned from co-aligned, segmented 3D prostate DCE-MRI data. The visual appearances of the object and its background are described with marginal gray-level distributions obtained by separating their mixture over the prostate volume. The spatial interactions between the prostate voxels are modeled by a 3D 2nd-order rotation-variant Markov-Gibbs random field (MGRF) of object/background labels with analytically estimated potentials. Experiments with real in vivo prostate DCE-MRI confirm the robustness and accuracy of the proposed approach.
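The abstract combines three log-likelihood terms into one voxel-wise MAP decision. The sketch below is a hedged illustration of that idea only, not the authors' implementation: a synthetic 1D "volume" is labeled by summing (i) a Gaussian appearance log-likelihood per class, (iii) a log shape-prior term, and (ii) a simple Potts-style smoothness term applied in one ICM-like sweep as a stand-in for the paper's MGRF with analytically estimated potentials. All distributions, parameters, and the `beta` potential are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "volume": background voxels ~ N(60, 10), object ~ N(140, 12)
intensities = np.concatenate([rng.normal(60, 10, 50), rng.normal(140, 12, 50)])

def gauss_loglik(x, mu, sigma):
    """Log-density of N(mu, sigma^2) evaluated at x."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# (i) First-order appearance log-likelihoods (parameters assumed known here;
# the paper estimates them by separating a gray-level mixture).
ll_bg = gauss_loglik(intensities, 60.0, 10.0)
ll_obj = gauss_loglik(intensities, 140.0, 12.0)

# (iii) Shape prior: per-voxel probability of "object" (placeholder ramp,
# standing in for the prior learned from co-aligned training volumes).
shape_p_obj = np.clip(np.linspace(0.05, 0.95, intensities.size), 1e-6, 1 - 1e-6)

# Initial MAP labels from appearance + shape terms only.
score_obj = ll_obj + np.log(shape_p_obj)
score_bg = ll_bg + np.log(1.0 - shape_p_obj)
labels = (score_obj > score_bg).astype(int)

# (ii) One ICM-style sweep adding a Potts-like smoothness reward `beta`
# per same-label neighbor (a crude surrogate for the MGRF potentials).
beta = 0.5
for i in range(1, labels.size - 1):
    n_obj = int(labels[i - 1] == 1) + int(labels[i + 1] == 1)
    n_bg = 2 - n_obj
    labels[i] = int(score_obj[i] + beta * n_obj > score_bg[i] + beta * n_bg)

print(labels)  # background-dominated start, object-dominated end
```

In the paper this decision is made jointly over the full 3D volume with a rotation-variant 2nd-order neighborhood; the one-pass 1D sweep above only shows how the three log terms enter a single additive objective.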