Machine learning in medical imaging. MLMI (Workshop): Latest Publications

Globally-Aware Multiple Instance Classifier for Breast Cancer Screening.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2019-10-01 Epub Date: 2019-10-10 DOI: 10.1007/978-3-030-32692-0_3
Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J Geras
Abstract: Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles in medical image analysis tasks. To address these unique properties of medical images, we propose a neural network that is able to classify breast cancer lesions utilizing information from both a global saliency map and multiple local patches. The proposed model outperforms the ResNet-based baseline and achieves radiologist-level performance in the interpretation of screening mammography. Although our model is trained only with image-level labels, it is able to generate pixel-level saliency maps that provide localization of possible malignant findings.
Pages: 18-26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7060084/pdf/nihms-1551235.pdf
Citations: 0
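The global-plus-local idea described in the abstract can be sketched as a simple score fusion: a whole-image score blended with the average of the most suspicious patch scores. The function name, the top-k averaging, and the blending weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aggregate_predictions(global_score, patch_scores, k=3, alpha=0.5):
    """Fuse a whole-image (global) malignancy score with scores from
    local patches by averaging the top-k patch scores, then blending."""
    top_k = np.sort(np.asarray(patch_scores))[-k:]   # k most suspicious patches
    local_score = top_k.mean()
    return alpha * global_score + (1 - alpha) * local_score

# Toy usage: a mildly suspicious global view plus a few patch scores.
fused = aggregate_predictions(0.4, [0.1, 0.9, 0.8, 0.2], k=2)
```

With `alpha=1.0` the patch branch is ignored entirely, which makes the blend easy to sanity-check.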
Jointly Discriminative and Generative Recurrent Neural Networks for Learning from fMRI.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2019-10-01 Epub Date: 2019-10-10 DOI: 10.1007/978-3-030-32692-0_44
Nicha C Dvornek, Xiaoxiao Li, Juntang Zhuang, James S Duncan
Abstract: Recurrent neural networks (RNNs) were designed for dealing with time-series data and have recently been used for creating predictive models from functional magnetic resonance imaging (fMRI) data. However, gathering large fMRI datasets for learning is a difficult task. Furthermore, network interpretability is unclear. To address these issues, we utilize multitask learning and design a novel RNN-based model that learns to discriminate between classes while simultaneously learning to generate the fMRI time-series data. Employing the long short-term memory (LSTM) structure, we develop a discriminative model based on the hidden state and a generative model based on the cell state. The addition of the generative model constrains the network to learn functional communities represented by the LSTM nodes that are both consistent with the data generation and useful for the classification task. We apply our approach to the classification of subjects with autism vs. healthy controls using several datasets from the Autism Brain Imaging Data Exchange. Experiments show that our jointly discriminative and generative model improves classification learning while also producing robust and meaningful functional communities for better model understanding.
Pages: 382-390. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7143657/pdf/nihms-1567698.pdf
Citations: 20
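The joint objective can be illustrated with a minimal numpy LSTM cell: the hidden state feeds a discriminative (classification) head while the cell state feeds a generative head that predicts the next fMRI frame. All dimensions, weight scales, and the equal weighting of the two losses are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

d, h = 4, 8                                      # input (ROI) dim, hidden dim
W = rng.standard_normal((4 * h, d + h)) * 0.1    # stacked gate weights [i; f; o; g]
b = np.zeros(4 * h)
Wc = rng.standard_normal((1, h)) * 0.1           # classifier head on hidden state
Wg = rng.standard_normal((d, h)) * 0.1           # generator head on cell state

def forward(x_seq, y):
    """x_seq: (T, d) fMRI series; y: 0/1 label. Returns the joint loss:
    cross-entropy from the hidden-state head plus the mean squared error
    of next-frame predictions from the cell-state head."""
    hst, cst = np.zeros(h), np.zeros(h)
    recon_loss = 0.0
    for t in range(len(x_seq) - 1):
        z = W @ np.concatenate([x_seq[t], hst]) + b
        i, f, o = sigmoid(z[:h]), sigmoid(z[h:2*h]), sigmoid(z[2*h:3*h])
        g = np.tanh(z[3*h:])
        cst = f * cst + i * g
        hst = o * np.tanh(cst)
        x_hat = Wg @ cst                          # generative head: predict next frame
        recon_loss += np.mean((x_hat - x_seq[t + 1]) ** 2)
    p = sigmoid(Wc @ hst)[0]                      # discriminative head on final hidden state
    cls_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return cls_loss + recon_loss / (len(x_seq) - 1)

loss = forward(rng.standard_normal((10, d)), 1)
```

Training would backpropagate through both heads so the cell state must stay useful for generation while the hidden state stays discriminative.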
Distanced LSTM: Time-Distanced Gates in Long Short-Term Memory Models for Lung Cancer Detection.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2019-10-01 Epub Date: 2019-10-10
Riqiang Gao, Yuankai Huo, Shunxing Bao, Yucheng Tang, Sanja L Antic, Emily S Epstein, Aneri B Balar, Steve Deppen, Alexis B Paulson, Kim L Sandler, Pierre P Massion, Bennett A Landman
Abstract: The field of lung nodule detection and cancer prediction has been rapidly developing with the support of large public data archives. Previous studies have largely focused on cross-sectional (single) CT data. Herein, we consider longitudinal data. The Long Short-Term Memory (LSTM) model addresses learning with regularly spaced time points (i.e., equal temporal intervals). However, clinical imaging follows patient needs, with often heterogeneous, irregular acquisitions. To model both regular and irregular longitudinal samples, we generalize the LSTM model with the Distanced LSTM (DLSTM) for temporally varied acquisitions. The DLSTM includes a Temporal Emphasis Model (TEM) that enables learning across regularly and irregularly sampled intervals. Briefly, (1) the temporal intervals between longitudinal scans are modeled explicitly; (2) temporally adjustable forget and input gates are introduced for irregular temporal sampling; and (3) the latest longitudinal scan has an additional emphasis term. We evaluate the DLSTM framework on three datasets: simulated data, 1794 National Lung Screening Trial (NLST) scans, and 1420 clinically acquired scans with heterogeneous and irregular temporal accession. The experiments on the first two datasets demonstrate that our method achieves competitive performance on both simulated and regularly sampled datasets (e.g., improving LSTM from 0.6785 to 0.7085 F1 score on NLST). In external validation on clinically and irregularly acquired data, the benchmarks achieved 0.8350 (CNN feature) and 0.8380 (LSTM) area under the ROC curve (AUC), while the proposed DLSTM achieves 0.8905.
Pages: 310-318. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8148226/pdf/nihms-1062384.pdf
Citations: 0
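The time-distanced gating idea can be illustrated with a toy temporal-emphasis weight that discounts the forget gate as the interval between scans grows, so stale history is forgotten faster. The exponential form and the parameters `a` and `b` are assumptions, not the paper's actual TEM.

```python
import numpy as np

def temporal_emphasis(delta_t, a=1.0, b=0.5):
    """Toy Temporal Emphasis Model: a weight that decays with the time
    gap (in years) between consecutive scans. The exponential form here
    is illustrative, not the published parameterization."""
    return a * np.exp(-b * np.asarray(delta_t, dtype=float))

def distanced_forget(f_gate, delta_t):
    """Scale a standard LSTM forget-gate activation by the temporal
    emphasis, so a longer interval keeps less of the old cell state."""
    return f_gate * temporal_emphasis(delta_t)

# A 2-year gap discounts the forget gate more than a 3-month gap.
recent = distanced_forget(0.9, 0.25)
stale = distanced_forget(0.9, 2.0)
```

The same weighting could be applied to the input gate, giving the "temporally adjustable forget and input gates" the abstract mentions.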
Multi-Scale Attentional Network for Multi-Focal Segmentation of Active Bleed after Pelvic Fractures.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2019-10-01 DOI: 10.1007/978-3-030-32692-0_53
Yuyin Zhou, David Dreizin, Yingwei Li, Zhishuai Zhang, Yan Wang, Alan Yuille
Abstract: Trauma is the worldwide leading cause of death and disability in those younger than 45 years, and pelvic fractures are a major source of morbidity and mortality. Automated segmentation of multiple foci of arterial bleeding from abdominopelvic trauma CT could provide rapid objective measurements of the total extent of active bleeding, potentially augmenting outcome prediction at the point of care, while improving patient triage, allocation of appropriate resources, and time to definitive intervention. In spite of the importance of active bleeding in the quick tempo of trauma care, the task is still quite challenging due to the variable contrast, intensity, location, size, shape, and multiplicity of bleeding foci. Existing work presents a heuristic rule-based segmentation technique which requires multiple stages and cannot be efficiently optimized end-to-end. To this end, we present the Multi-Scale Attentional Network (MSAN), the first reliable end-to-end network for automated segmentation of active hemorrhage from contrast-enhanced trauma CT scans. MSAN consists of the following components: (1) an encoder which fully integrates the global contextual information from holistic 2D slices; (2) a multi-scale strategy applied in both the training and inference stages to handle the challenges induced by variation of target sizes; (3) an attentional module to further refine the deep features, leading to better segmentation quality; and (4) a multi-view mechanism to leverage the 3D information. MSAN reports a significant improvement of more than 7% compared to prior art in terms of DSC.
Volume: 11861. Pages: 461-469. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10314367/pdf/nihms-1912145.pdf
Citations: 15
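Component (2), the multi-scale strategy at inference time, can be sketched as running a single-scale segmenter over rescaled inputs and averaging the probability maps back at the original resolution. The helper names and the nearest-neighbour resize are illustrative; MSAN's exact scheme may differ.

```python
import numpy as np

def resize_nn(img, shape):
    """Nearest-neighbour resize for a 2-D array (no external deps)."""
    r = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    c = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(r, c)]

def multi_scale_segment(img, segment_fn, scales=(0.5, 1.0, 2.0)):
    """Run a segmenter at several input scales and average the resulting
    probability maps at the original resolution -- a common test-time
    multi-scale scheme for handling variable target sizes."""
    h, w = img.shape
    probs = []
    for s in scales:
        scaled = resize_nn(img, (max(1, int(h * s)), max(1, int(w * s))))
        probs.append(resize_nn(segment_fn(scaled), (h, w)))
    return np.mean(probs, axis=0)

# Toy segmenter: an intensity threshold stands in for the trained network.
toy = lambda x: (x > 0.5).astype(float)
mask = multi_scale_segment(np.random.default_rng(1).random((8, 8)), toy)
```

Averaging over scales smooths out objects that only segment well at one particular resolution.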
Machine Learning in Medical Imaging: 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Proceedings
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2019-01-01 DOI: 10.1007/978-3-030-32692-0
Citations: 7
End-To-End Alzheimer's Disease Diagnosis and Biomarker Identification.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_39
Soheil Esmaeilzadeh, Dimitrios Ioannis Belivanis, Kilian M Pohl, Ehsan Adeli
Abstract: As shown in computer vision, the power of deep learning lies in automatically learning relevant and powerful features for any prediction task, which is made possible through end-to-end architectures. However, deep learning approaches applied to classifying medical images do not adhere to this architecture, as they rely on several pre- and post-processing steps. This shortcoming can be explained by the relatively small number of available labeled subjects, the high dimensionality of neuroimaging data, and difficulties in interpreting the results of deep learning methods. In this paper, we propose a simple 3D Convolutional Neural Network and exploit its model parameters to tailor the end-to-end architecture for the diagnosis of Alzheimer's disease (AD). Our model can diagnose AD with an accuracy of 94.1% on the popular ADNI dataset using only MRI data, which outperforms the previous state-of-the-art. Based on the learned model, we identify the disease biomarkers, which were in accordance with the literature. We further transfer the learned model to diagnose mild cognitive impairment (MCI), the prodromal stage of AD, which yields better results compared to other methods.
Volume: 11046. Pages: 337-345. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7440044/pdf/nihms-1617549.pdf
Citations: 0
Developing Novel Weighted Correlation Kernels for Convolutional Neural Networks to Extract Hierarchical Functional Connectivities from fMRI for Disease Diagnosis.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_1
Biao Jie, Mingxia Liu, Chunfeng Lian, Feng Shi, Dinggang Shen
Abstract: Functional magnetic resonance imaging (fMRI) has been widely applied to the analysis and diagnosis of brain diseases, including Alzheimer's disease (AD) and its prodrome, i.e., mild cognitive impairment (MCI). Traditional methods usually construct connectivity networks (CNs) by simply calculating Pearson correlation coefficients (PCCs) between time series of brain regions, and then extract low-level network measures as features to train the learning model. However, the valuable observation information in network construction (e.g., specific contributions of different time points) and high-level (i.e., high-order) network properties are neglected in these methods. In this paper, we first define a novel weighted correlation kernel (called wc-kernel) to measure the correlation of brain regions, by which weighting factors are determined in a data-driven manner to characterize the contribution of each time point, thus conveying richer interaction information of brain regions compared with the PCC method. Furthermore, we propose a wc-kernel based convolutional neural network (CNN) framework (called wck-CNN) for extracting hierarchical (i.e., from low-order to high-order) functional connectivities for disease diagnosis using fMRI data. Specifically, we first define a layer to build dynamic CNs (DCNs) using the defined wc-kernels. Then, we define three layers to extract local (region specific), global (network specific) and temporal high-order properties from the constructed low-order functional connectivities as features for classification. Results on 174 subjects (a total of 563 scans) with rs-fMRI data from ADNI suggest that our method can not only improve performance compared with state-of-the-art methods, but also provide novel insights into the interaction patterns of brain activities and their changes in diseases.
Volume: 11046. Pages: 1-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6410567/pdf/
Citations: 0
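The core of the wc-kernel idea, replacing the uniform averaging inside the Pearson correlation with per-time-point weights, can be sketched in a few lines. Here the weights are supplied by hand, whereas the paper learns them in a data-driven manner.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted Pearson correlation of two regional time series, with a
    per-time-point weight w (normalised to sum to 1). With uniform
    weights this reduces exactly to the ordinary PCC."""
    w = np.asarray(w, float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

# Two phase-shifted regional signals; uniform weights recover plain PCC.
t = np.linspace(0, 2 * np.pi, 50)
x, y = np.sin(t), np.sin(t + 0.3)
uniform = weighted_corr(x, y, np.ones_like(t))
```

Emphasizing some time points (e.g., task-relevant windows) changes the estimated connectivity, which is exactly the extra degree of freedom the learned kernel exploits.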
Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_7
Xiaohuan Cao, Jianhua Yang, Li Wang, Zhong Xue, Qian Wang, Dinggang Shen
Abstract: Non-rigid inter-modality registration can facilitate accurate information fusion from different modalities, but it is challenging due to the very different image appearances across modalities. In this paper, we propose to train a non-rigid inter-modality image registration network, which can directly predict the transformation field from the input multimodal images, such as CT and MR images. In particular, the training of our inter-modality registration network is supervised by an intra-modality similarity metric based on the available paired data, which is derived from a pre-aligned CT and MR dataset. Specifically, in the training stage, to register the input CT and MR images, their similarity is evaluated on the warped MR image and the MR image that is paired with the input CT. Thus, the intra-modality similarity metric can be directly applied to measure whether the input CT and MR images are well registered. Moreover, we adopt a dual-modality fashion, in which we measure the similarity on both the CT modality and the MR modality. In this way, the complementary anatomies in both modalities can be jointly considered to more accurately train the inter-modality registration network. In the testing stage, the trained inter-modality registration network can be directly applied to register new multimodal images without any paired data. Experimental results have shown that the proposed method can achieve promising accuracy and efficiency for the challenging non-rigid inter-modality registration task and also outperforms state-of-the-art approaches.
Volume: 11046. Pages: 55-63.
Citations: 75
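The intra-modality supervision can be sketched as a dual-modality loss: each warped image is compared, within its own modality, against the pre-aligned paired image. Normalised cross-correlation serves as the similarity metric here for illustration; the paper's exact metric and loss weighting are not specified in the abstract.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalised cross-correlation between two zero-meaned images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def dual_modality_loss(warped_mr, paired_mr, warped_ct, paired_ct):
    """Intra-modality supervision: compare each warped image with the
    pre-aligned image of the SAME modality, summed over both sides
    (a sketch of the training signal, not the published loss)."""
    return (1 - ncc(warped_mr, paired_mr)) + (1 - ncc(warped_ct, paired_ct))

# Toy usage: identical (already-aligned) images give a near-zero loss.
mr = np.random.default_rng(2).standard_normal((16, 16))
ct = np.random.default_rng(3).standard_normal((16, 16))
perfect = dual_modality_loss(mr, mr, ct, ct)
```

Because both terms compare images of the same modality, the loss avoids ever having to score CT against MR directly, which is the point of the intra-modality supervision.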
Early Diagnosis of Autism Disease by Multi-channel CNNs.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_35
Guannan Li, Mingxia Liu, Quansen Sun, Dinggang Shen, Li Wang
Abstract: Currently there are still no early biomarkers to detect infants at risk of autism spectrum disorder (ASD), which is mainly diagnosed based on behavioral observations at three or four years old. Since intervention efforts may miss a critical developmental window after 2 years old, it is important to identify imaging-based biomarkers for early diagnosis of ASD. Although some methods using magnetic resonance imaging (MRI) for brain disease prediction have been proposed in the last decade, few of them were developed for predicting ASD at an early age. Inspired by deep multi-instance learning, in this paper we propose a patch-level data-expanding strategy for multi-channel convolutional neural networks to automatically identify infants at risk of ASD at an early age. Experiments were conducted on the National Database for Autism Research (NDAR), with results showing that our proposed method can significantly improve the performance of early diagnosis of ASD.
Volume: 11046. Pages: 303-309. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235442/pdf/nihms-994933.pdf
Citations: 0
Automatic Accurate Infant Cerebellar Tissue Segmentation with Densely Connected Convolutional Network.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_27
Jiawei Chen, Han Zhang, Dong Nie, Li Wang, Gang Li, Weili Lin, Dinggang Shen
Abstract: The human cerebellum has been recognized as a key brain structure for motor control and cognitive function regulation. Investigation of brain functional development in early life has recently been focusing on both cerebral and cerebellar development. Accurate segmentation of the infant cerebellum into different tissues is among the most important steps for quantitative developmental studies. However, this is extremely challenging due to the weak tissue contrast, extremely folded structures, and severe partial volume effect. To date, there are very few works addressing infant cerebellum segmentation. We tackle this challenge by proposing a densely connected convolutional network to learn robust feature representations of different cerebellar tissues towards automatic and accurate segmentation. Specifically, we develop a novel deep neural network architecture by directly connecting all the layers to ensure maximum information flow even among distant layers in the network. This is distinct from all previous studies. Importantly, the outputs from all previous layers are passed to all subsequent layers as contextual features that can guide the segmentation. Our method achieved superior performance over other state-of-the-art methods when applied to Baby Connectome Project (BCP) data consisting of both 6- and 12-month-old infant brain images.
Volume: 11046. Pages: 233-240.
Citations: 3
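The dense connectivity pattern, every layer consuming the concatenation of all earlier outputs, can be sketched with 1-D feature vectors standing in for 3-D feature volumes. The layer count, growth size, and random linear layers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, n_layers=4, growth=8):
    """Densely connected block: every layer receives the concatenation
    of ALL previous feature outputs, so even distant layers exchange
    information directly, and all features reach the block output."""
    feats = [x]
    for _ in range(n_layers):
        inp = np.concatenate(feats)               # all earlier outputs
        Wl = rng.standard_normal((growth, inp.size)) * 0.1
        feats.append(np.maximum(Wl @ inp, 0.0))   # linear layer + ReLU
    return np.concatenate(feats)

# 16 input features + 4 layers x 8 new features each = 48 outputs.
out = dense_block(np.ones(16))
```

Note that the raw input survives untouched in the output, which is the "maximum information flow" property the abstract highlights.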