Segmentation of Craniomaxillofacial Bony Structures from MRI with a 3D Deep-Learning Based Cascade Framework
Dong Nie, Li Wang, Roger Trullo, Jianfu Li, Peng Yuan, James Xia, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), vol. 10541, pp. 266-273, 2017. DOI: 10.1007/978-3-319-67389-9_31
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5798482/pdf/nihms915076.pdf

Abstract: Computed tomography (CT) is commonly used as a diagnostic and treatment-planning imaging modality in craniomaxillofacial (CMF) surgery to correct patients' bony defects. A major disadvantage of CT is that it exposes patients to harmful ionizing radiation during the exam. Magnetic resonance imaging (MRI) is considered much safer and noninvasive, and is often used to study CMF soft tissues (e.g., the temporomandibular joint and brain). However, it is extremely difficult to accurately segment CMF bony structures from MRI, since both bone and air appear dark in MRI, and the images further suffer from low signal-to-noise ratio and partial volume effects. To address these issues, we propose a 3D deep-learning based cascade framework. Specifically, a 3D fully convolutional network (FCN) is first adopted to coarsely segment the bony structures. Because the bony structures coarsely segmented by the FCN tend to be thicker than the ground truth, a convolutional neural network (CNN) is then applied for fine-grained segmentation. To enhance the discriminative ability of the CNN, we concatenate the probability maps predicted by the FCN with the original MRI and feed them together into the CNN, providing more context information for segmentation. Experimental results demonstrate good performance and the clinical feasibility of the proposed framework.

Novel Effective Connectivity Network Inference for MCI Identification
Yang Li, Hao Yang, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 316-324, 2017. DOI: 10.1007/978-3-319-67389-9_37

Abstract: Inferring effective brain connectivity networks is a challenging task owing to perplexing noise effects, the curse of dimensionality, and inter-subject variability. Most existing network inference methods are based on correlation analysis and consider data points individually, revealing limited information about neuronal interactions and ignoring the relations among the derivatives of the data. Hence, we propose a novel ultra group-constrained sparse linear regression model for effective connectivity inference. The model utilizes not only the discrepancy between the observed signals and the model prediction, but also the discrepancy between the associated weak derivatives of the observed and model signals, for more accurate effective connectivity inference. In addition, a group constraint is applied to minimize inter-subject variability. The proposed model was validated on a mild cognitive impairment (MCI) dataset, achieving superior results.
{"title":"Sparse Multi-view Task-Centralized Learning for ASD Diagnosis.","authors":"Jun Wang, Qian Wang, Shitong Wang, Dinggang Shen","doi":"10.1007/978-3-319-67389-9_19","DOIUrl":"https://doi.org/10.1007/978-3-319-67389-9_19","url":null,"abstract":"<p><p>It is challenging to derive early diagnosis from neuroimaging data for autism spectrum disorder (ASD). In this work, we propose a novel sparse multi-view task-centralized (Sparse-MVTC) classification method for computer-assisted diagnosis of ASD. In particular, since ASD is known to be age- and sex-related, we partition all subjects into different groups of age/sex, each of which can be treated as a classification task to learn. Meanwhile, we extract multi-view features from functional magnetic resonance imaging to describe the brain connectivity of each subject. This formulates a multi-view multi-task sparse learning problem and it is solved by a novel Sparse-MVTC method. Specifically, we treat each task as a central task and other tasks as the auxiliary ones. We then consider the task-task and view-view relations between the central task and each auxiliary task. We can use this task-centralized strategy for a highly efficient solution. The comprehensive experiments on the ABIDE database demonstrate that our proposed Sparse-MVTC method can significantly outperform the existing classification methods in ASD diagnosis.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"10541 ","pages":"159-167"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-319-67389-9_19","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35842724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Unsupervised Discovery of Emphysema Subtypes in a Large Clinical Cohort
Polina Binder, Nematollah K. Batmanghelich, Raul San Jose Estepar, Polina Golland
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 180-187, 2016. DOI: 10.1007/978-3-319-47157-0_22
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5317320/pdf/nihms837319.pdf

Abstract: Emphysema is one of the hallmarks of chronic obstructive pulmonary disease (COPD), a devastating lung disease often caused by smoking. Emphysema appears on computed tomography (CT) scans as a variety of textures that correlate with disease subtypes. It has been shown that the disease subtypes and textures are linked to physiological indicators and prognosis, although neither is well characterized clinically. Most previous computational approaches to modeling emphysema imaging data have focused on supervised classification of lung textures in patches of CT scans. In this work, we describe a generative model that jointly captures the heterogeneity of disease subtypes and of the patient population, along with a corresponding inference algorithm that simultaneously discovers disease subtypes and population structure in an unsupervised manner. This approach enables us to create image-based descriptors of emphysema beyond those identifiable through manual labeling of currently defined phenotypes. Applying the resulting algorithm to a large dataset, we identify groups of patients and disease subtypes that correlate with distinct physiological indicators.
{"title":"Identifying High Order Brain Connectome Biomarkers via Learning on Hypergraph.","authors":"Chen Zu, Yue Gao, Brent Munsell, Minjeong Kim, Ziwen Peng, Yingying Zhu, Wei Gao, Daoqiang Zhang, Dinggang Shen, Guorong Wu","doi":"10.1007/978-3-319-47157-0_1","DOIUrl":"https://doi.org/10.1007/978-3-319-47157-0_1","url":null,"abstract":"<p><p>The functional connectome has gained increased attention in the neuroscience community. In general, most network connectivity models are based on correlations between discrete-time series signals that only connect two different brain regions. However, these bivariate region-to-region models do not involve three or more brain regions that form a subnetwork. Here we propose a learning-based method to explore subnetwork biomarkers that are significantly distinguishable between two clinical cohorts. Learning on hypergraph is employed in our work. Specifically, we construct a hypergraph by exhaustively inspecting all possible subnetworks for all subjects, where each hyperedge connects a group of subjects demonstrating highly correlated functional connectivity behavior throughout the underlying subnetwork. The objective function of hypergraph learning is to jointly optimize the weights for all hyperedges which make the separation of two groups by the learned data representation be in the best consensus with the observed clinical labels. We deploy our method to find high order childhood autism biomarkers from rs-fMRI images. Promising results have been obtained from comprehensive evaluation on the discriminative power and generality in diagnosis of Autism.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-319-47157-0_1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36253612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Dual-Layer Groupwise Registration for Consistent Labeling of Longitudinal Brain Images
Minjeong Kim, Guorong Wu, Islem Rekik, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 69-76, 2016. DOI: 10.1007/978-3-319-47157-0_9

Abstract: The growing collection of longitudinal images for brain disease diagnosis necessitates advanced longitudinal registration and anatomical labeling methods that respect temporal consistency between images. However, the characteristics of such longitudinal images and how they lie on the image manifold are often neglected in existing labeling methods. Indeed, most methods independently align atlases to each target time-point image in order to propagate the pre-defined atlas labels to the subject domain. In this paper, we present a dual-layer groupwise registration method to consistently label anatomical regions of interest in brain images across time points within a multi-atlas labeling framework. Our framework enhances the labeling of longitudinal images by: (1) using the group mean of each subject's longitudinal images (the subject-mean) as a bridge between the atlases and the longitudinal scans, aligning the atlases to all time-point images jointly; and (2) using inter-atlas relationships on their nesting manifold to better register each atlas image to the subject-mean. These steps yield registration that is both more consistent (from the joint alignment of atlases with all time-point images) and more accurate (from the manifold-guided registration between each atlas and the subject-mean image), thereby improving the consistency and accuracy of the subsequent labeling step. We tested our dual-layer groupwise registration method on labeling two challenging longitudinal brain datasets (healthy infants and Alzheimer's disease subjects). Our experimental results show that the method achieves higher labeling accuracy while preserving labeling consistency over time, compared to the traditional registration scheme without the proposed contributions. Moreover, the framework can flexibly integrate existing label fusion methods, such as sparse patch-based methods, to further improve the labeling accuracy of longitudinal datasets.

Segmentation of Perivascular Spaces Using Vascular Features and Structured Random Forest from 7T MR Image
Jun Zhang, Yaozong Gao, Sang Hyun Park, Xiaopeng Zong, Weili Lin, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 61-68, 2016. DOI: 10.1007/978-3-319-47157-0_8

Abstract: Quantitative analysis of perivascular spaces (PVSs) is important for revealing correlations between cerebrovascular lesions and neurodegenerative diseases. In this study, we propose a learning-based segmentation framework to extract PVSs from high-resolution 7T MR images. Specifically, we integrate three types of vascular filter responses into a structured random forest that classifies voxels into PVS and background. We also propose a novel entropy-based sampling strategy to extract informative background samples for training the classification model. Since the three vascular filters capture a variety of vascular features, even thin and low-contrast structures can be effectively extracted from the noisy background. Moreover, continuous and smooth segmentation results are obtained by utilizing patch-based structured labels. The segmentation performance is evaluated on 19 subjects with 7T MR images, and the experimental results demonstrate that the joint use of the entropy-based sampling strategy, vascular features, and structured learning improves segmentation accuracy, with the Dice similarity coefficient reaching 66%.

Regression Guided Deformable Models for Segmentation of Multiple Brain ROIs
Zhengwang Wu, Sang Hyun Park, Yanrong Guo, Yaozong Gao, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 237-245, 2016. DOI: 10.1007/978-3-319-47157-0_29

Abstract: This paper proposes a novel method that uses regression-guided deformable models for segmenting brain regions of interest (ROIs). Unlike conventional deformable segmentation, which often deforms the shape model locally and is thus sensitive to initialization, we learn a regressor to explicitly guide the shape deformation, thereby improving ROI segmentation performance. The regressor is learned in two steps: (1) a joint classification and regression random forest (CRRF), and (2) an auto-context model. The CRRF predicts each voxel's deformation to the nearest point on the ROI boundary as well as each voxel's class label (e.g., ROI versus background). The auto-context model further refines all voxels' deformations (i.e., the deformation field) and class labels (i.e., the label maps) by considering neighboring structures. Compared to a conventional random forest regressor, the proposed regressor estimates the deformation field more accurately and is thus more robust in guiding deformation of the shape model. Validated on segmentation of 14 midbrain ROIs from the IXI dataset, our method outperforms state-of-the-art multi-atlas label fusion and classification methods, while also significantly reducing computation cost.
{"title":"Automatic Hippocampal Subfield Segmentation from 3T Multi-modality Images.","authors":"Zhengwang Wu, Yaozong Gao, Feng Shi, Valerie Jewells, Dinggang Shen","doi":"10.1007/978-3-319-47157-0_28","DOIUrl":"10.1007/978-3-319-47157-0_28","url":null,"abstract":"<p><p>Hippocampal subfields play important and divergent roles in both memory formation and early diagnosis of many neurological diseases, but automatic subfield segmentation is less explored due to its small size and poor image contrast. In this paper, we propose an automatic learning-based hippocampal subfields segmentation framework using multi-modality 3TMR images, including T1 MRI and resting-state fMRI (rs-fMRI). To do this, we first acquire both 3T and 7T T1 MRIs for each training subject, and then the 7T T1 MRI are linearly registered onto the 3T T1 MRI. Six hippocampal subfields are manually labeled on the aligned 7T T1 MRI, which has the 7T image contrast but sits in the 3T T1 space. Next, corresponding appearance and relationship features from both 3T T1 MRI and rs-fMRI are extracted to train a structured random forest as a multi-label classifier to conduct the segmentation. Finally, the subfield segmentation is further refined iteratively by additional context features and updated relationship features. To our knowledge, this is the first work that addresses the challenging automatic hippocampal subfields segmentation using 3T routine T1 MRI and rs-fMRI. The quantitative comparison between our results and manual ground truth demonstrates the effectiveness of our method. Besides, we also find that (a) multi-modality features significantly improved subfield segmentation performance due to the complementary information among modalities; (b) automatic segmentation results using 3T multimodality images are partially comparable to those on 7T T1 MRI.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"229-236"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5464731/pdf/nihms833106.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35080513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Learning-Based 3T Brain MRI Segmentation with Guidance from 7T MRI Labeling
Renping Yu, Minghui Deng, Pew-Thian Yap, Zhihui Wei, Li Wang, Dinggang Shen
Machine Learning in Medical Imaging. MLMI (Workshop), pp. 213-220, 2016. DOI: 10.1007/978-3-319-47157-0_26
(No abstract available.)