{"title":"Multi-stage Diagnosis of Alzheimer's Disease with Incomplete Multimodal Data via Multi-task Deep Learning.","authors":"Kim-Han Thung, Pew-Thian Yap, Dinggang Shen","doi":"10.1007/978-3-319-67558-9_19","DOIUrl":null,"url":null,"abstract":"<p><p>Utilization of biomedical data from multiple modalities improves the diagnostic accuracy of neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When using such incomplete data for diagnosis, current approaches for addressing the problem of missing data, such as imputation, matrix completion and multi-task learning, implicitly assume linear data-to-label relationship, therefore limiting their performances. We thus propose multi-task deep learning for incomplete data, where prediction tasks that are associated with different modality combinations are learnt jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework, and train our deep network subnet-wise, partially updating its weights based on the availability of modality data. The experimental results using the ADNI dataset show that our method outperforms the state-of-the-art methods.</p>","PeriodicalId":92023,"journal":{"name":"Deep learning in medical image analysis and multimodal learning for clinical decision support : Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, held in conjunction with MICCAI 2017 Quebec City, QC,...","volume":"10553 ","pages":"160-168"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-319-67558-9_19","citationCount":"49","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Deep learning in medical image analysis and multimodal learning for clinical decision support : Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, held in conjunction with MICCAI 2017 Quebec City, QC,...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-319-67558-9_19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2017/9/9 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 49
Abstract
Utilization of biomedical data from multiple modalities improves diagnostic accuracy for neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When using such incomplete data for diagnosis, current approaches for addressing the problem of missing data, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, thereby limiting their performance. We thus propose multi-task deep learning for incomplete data, where prediction tasks associated with different modality combinations are learnt jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework, and train our deep network subnet-wise, partially updating its weights based on the availability of modality data. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
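The paper does not include code; the following is only a minimal PyTorch sketch of the general idea described in the abstract: a multi-input multi-output network with one encoder subnet per modality and one classification head per modality combination, trained subnet-wise so that only the subnets whose modalities are present in a mini-batch receive gradient updates. The modality names (mri, pet), feature dimensions, layer sizes, and three-class output are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (not the authors' code): subnet-wise training for
# incomplete multimodal data. All sizes and modality names are assumed.
import torch
import torch.nn as nn


class MultiModalMTL(nn.Module):
    def __init__(self, dims, hidden=64, n_classes=3):
        super().__init__()
        # One encoder subnet per modality (multi-input).
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for m, d in dims.items()}
        )
        # One head per single-modality task (multi-output) ...
        self.heads = nn.ModuleDict({m: nn.Linear(hidden, n_classes) for m in dims})
        # ... plus a fused head for the all-modalities task.
        self.all_modalities = sorted(dims)
        self.fused_head = nn.Linear(len(dims) * hidden, n_classes)

    def forward(self, inputs):
        # `inputs` maps modality name -> tensor, containing only the
        # modalities that are available for this mini-batch.
        feats = {m: self.encoders[m](x) for m, x in inputs.items()}
        outputs = {m: self.heads[m](f) for m, f in feats.items()}
        if set(feats) == set(self.all_modalities):
            fused = torch.cat([feats[m] for m in self.all_modalities], dim=1)
            outputs["fused"] = self.fused_head(fused)
        return outputs


def train_step(model, optimizer, inputs, labels):
    # Only the subnets reached by the available modalities get gradients,
    # i.e. the network weights are partially updated per mini-batch.
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = sum(criterion(logits, labels) for logits in outputs.values())
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = MultiModalMTL({"mri": 93, "pet": 93})  # assumed ROI feature sizes
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy mini-batch with only MRI available: only the MRI encoder and
    # the MRI head are updated; the PET subnet and fused head are untouched.
    x_mri = torch.randn(8, 93)
    y = torch.randint(0, 3, (8,))
    print(train_step(model, opt, {"mri": x_mri}, y))
```

In this sketch, mini-batches grouped by modality combination drive which subnets are updated, which is one straightforward way to realize the "partial weight update based on modality availability" that the abstract describes.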