Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, ...

Latest Articles
Quantifying the Impact of Type 2 Diabetes on Brain Perfusion Using Deep Neural Networks
Authors: Behrouz Saghafi, Prabhat Garg, B. Wagner, S. C. Smith, Jianzhao Xu, A. Madhuranthakam, Youngkyoo Jung, J. Divers, B. Freedman, J. Maldjian, A. Montillo
DOI: 10.1007/978-3-319-67558-9_18 | Pages: 151-159 | Published: 2017-09-14
Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures
Authors: Roger Trullo, Caroline Petitjean, Dong Nie, Dinggang Shen, Su Ruan
DOI: 10.1007/978-3-319-67558-9_3 | Pages: 21-29 | Published: 2017-09-01
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5918174/pdf/nihms960274.pdf
Abstract: Computed tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of organs at risk (OAR) in thoracic CT images is a necessary step before radiotherapy to prevent irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is challenging. In this paper, we develop a novel framework for automatic delineation of OARs. Unlike previous work on OAR segmentation, where each organ is segmented separately, we propose two collaborative deep architectures that jointly segment all organs, including the esophagus, heart, aorta, and trachea. Since most organ borders are ill-defined, spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network that are then used by the second network as it segments each OAR in turn. Specifically, the first architecture, a deep SharpMask network, provides an effective combination of low-level representations with deep high-level features, and the spatial relationships between organs are then taken into account with a conditional random field (CRF). The second architecture refines the segmentation of each organ, using the maps produced by the first architecture as learned anatomical constraints to guide and refine the segmentations. Experimental results on 30 CT scans show superior performance compared with other state-of-the-art methods.
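The two-stage design described in the abstract lends itself to a short sketch: a first network emits per-organ probability maps, and a second network conditions on those maps while refining one organ at a time. The PyTorch sketch below illustrates only that wiring; the SharpMask backbone and the CRF step are omitted, and the layer sizes, class names, and 2D setting are our assumptions, not the authors' code.

```python
# Minimal sketch of the two-network idea: stage 1 produces joint per-organ
# probability maps; stage 2 refines one organ while conditioning on all maps.
# All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

N_ORGANS = 4  # esophagus, heart, aorta, trachea

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class FirstNet(nn.Module):
    """Stand-in for the SharpMask-style network: CT slice in, organ maps out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(1, 16), conv_block(16, 16))
        self.head = nn.Conv2d(16, N_ORGANS, kernel_size=1)

    def forward(self, ct):
        return torch.softmax(self.head(self.body(ct)), dim=1)

class RefineNet(nn.Module):
    """Second network: refines one organ given the CT slice plus all initial
    maps, which act as learned anatomical context."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(1 + N_ORGANS, 16), conv_block(16, 16))
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, ct, prob_maps):
        x = torch.cat([ct, prob_maps], dim=1)  # condition on spatial context
        return torch.sigmoid(self.head(self.body(x)))

ct = torch.randn(2, 1, 64, 64)    # batch of 2D CT slices
maps = FirstNet()(ct)             # stage 1: joint probability maps
heart = RefineNet()(ct, maps)     # stage 2: refine one organ at a time
print(maps.shape, heart.shape)    # (2, 4, 64, 64) (2, 1, 64, 64)
```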
Multi-stage Diagnosis of Alzheimer's Disease with Incomplete Multimodal Data via Multi-task Deep Learning
Authors: Kim-Han Thung, Pew-Thian Yap, Dinggang Shen
DOI: 10.1007/978-3-319-67558-9_19 | Pages: 160-168 | Published: 2017-09-01
Abstract: Utilizing biomedical data from multiple modalities improves the diagnostic accuracy of neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When using such incomplete data for diagnosis, current approaches to the missing-data problem, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, limiting their performance. We therefore propose multi-task deep learning for incomplete data, where prediction tasks associated with different modality combinations are learned jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework and train our deep network subnet-wise, partially updating its weights based on the availability of modality data. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
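The subnet-wise training scheme can be illustrated with a small sketch: each modality gets its own encoder subnet, each modality combination its own output head, and a training step backpropagates only through the subnets whose inputs are present. Everything below (modality names, layer sizes, the two heads) is an illustrative assumption, not the ADNI model from the paper.

```python
# Rough sketch of subnet-wise training: only the subnets whose modality is
# observed in a batch are forwarded through, so only they receive gradients.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, dims, n_classes=3):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, 32), nn.ReLU()) for m, d in dims.items()}
        )
        # one classification head per modality combination (task)
        self.heads = nn.ModuleDict({
            "mri": nn.Linear(32, n_classes),
            "mri+pet": nn.Linear(64, n_classes),
        })

    def forward(self, sample):
        # sample: dict of the available modality tensors; route to the
        # head that matches the modality combination
        feats = [self.encoders[m](sample[m]) for m in sorted(sample)]
        return self.heads["+".join(sorted(sample))](torch.cat(feats, dim=1))

net = MultiModalNet({"mri": 90, "pet": 90})
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one batch with both modalities, one with MRI only: the MRI-only step
# leaves the PET encoder untouched (its parameters receive no gradient)
batches = [
    ({"mri": torch.randn(4, 90), "pet": torch.randn(4, 90)}, torch.randint(0, 3, (4,))),
    ({"mri": torch.randn(4, 90)}, torch.randint(0, 3, (4,))),
]
for sample, labels in batches:
    opt.zero_grad(set_to_none=True)  # unused subnets keep grad=None and are skipped
    loss_fn(net(sample), labels).backward()
    opt.step()
```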
Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations
Authors: Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, M. Jorge Cardoso
DOI: 10.1007/978-3-319-67558-9_28 | Pages: 240-248 | Published: 2017-01-01
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7610921/pdf/EMS126388.pdf
Abstract: Deep learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, severe class imbalance is likely to occur between candidate labels, resulting in sub-optimal performance. To mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function, and the Dice loss function have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning-rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the generalised Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.
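The generalised Dice loss itself is compact enough to show directly. The sketch below implements the usual formulation, with per-class weights inversely proportional to the squared reference volume, w_l = 1/(Σ_n r_ln)²; the tensor layout and epsilon handling are our choices, not the authors' reference implementation.

```python
# Generalised Dice loss: GDL = 1 - 2 * sum_l w_l sum_n r_ln p_ln
#                                    / sum_l w_l sum_n (r_ln + p_ln),
# with w_l = 1 / (sum_n r_ln)^2, which up-weights rare classes.
import torch

def generalised_dice_loss(probs, target_onehot, eps=1e-6):
    """probs, target_onehot: (batch, classes, *spatial), probs in [0, 1]."""
    dims = (0,) + tuple(range(2, probs.dim()))   # sum over batch and space
    ref_vol = target_onehot.sum(dim=dims)        # per-class reference volume
    weights = 1.0 / (ref_vol * ref_vol + eps)    # w_l = 1 / (sum_n r_ln)^2
    intersect = (probs * target_onehot).sum(dim=dims)
    denom = (probs + target_onehot).sum(dim=dims)
    gd = 2.0 * (weights * intersect).sum() / ((weights * denom).sum() + eps)
    return 1.0 - gd

# tiny usage example: 2 classes, 4x4 image, heavy foreground/background imbalance
logits = torch.randn(1, 2, 4, 4, requires_grad=True)
probs = torch.softmax(logits, dim=1)
target = torch.zeros(1, 2, 4, 4)
target[:, 0] = 1.0            # background everywhere...
target[0, 0, 1, 1] = 0.0      # ...except a single foreground pixel
target[0, 1, 1, 1] = 1.0
loss = generalised_dice_loss(probs, target)
loss.backward()               # differentiable, so usable as a training loss
print(float(loss))
```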
EMR-Radiological Phenotypes in Diseases of the Optic Nerve and Their Association with Visual Function
Authors: Shikha Chaganti, Jamie R. Robinson, Camilo Bermudez, Thomas Lasko, Louise A. Mawn, Bennett A. Landman
DOI: 10.1007/978-3-319-67558-9_43 | Pages: 373-381 | Published: 2017-01-01
Abstract: Multi-modal analyses of diseases of the optic nerve, which combine radiological imaging with other electronic medical records (EMR), improve understanding of visual function. We conducted a study of 55 patients with glaucoma and 32 patients with thyroid eye disease (TED), collecting their visual assessments, orbital CT imaging, and EMR data. We developed an image-processing pipeline that segmented and extracted structural metrics from the CT images, and derived EMR phenotype vectors with the help of PheWAS (from diagnostic codes) and ProWAS (from treatment codes). We then performed principal component analysis and multiple correspondence analysis to identify their association with visual function scores. We find that structural metrics derived from CT imaging are significantly associated with functional visual score for both glaucoma (R² = 0.32) and TED (R² = 0.4). Adding EMR phenotype vectors to the model significantly improved (p < 1E-04) the R² to 0.4 for glaucoma and 0.54 for TED.
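The final modelling step, comparing an imaging-only regression of visual function against one augmented with EMR phenotype vectors via R², can be sketched as below. The data are synthetic and the feature dimensions invented; this illustrates only the comparison, not the study's actual pipeline (which also involved PCA and multiple correspondence analysis).

```python
# Sketch of the R^2 comparison: visual function regressed on imaging metrics
# alone, then on imaging metrics concatenated with EMR phenotype vectors.
# All shapes and data are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 87                                  # 55 glaucoma + 32 TED subjects
imaging = rng.normal(size=(n, 10))      # CT-derived structural metrics
emr = rng.normal(size=(n, 20))          # PheWAS/ProWAS phenotype vectors
visual_score = (imaging @ rng.normal(size=10)
                + 0.5 * emr @ rng.normal(size=20)
                + rng.normal(scale=2.0, size=n))

for name, X in [("imaging only", imaging),
                ("imaging + EMR", np.hstack([imaging, emr]))]:
    pred = cross_val_predict(LinearRegression(), X, visual_score, cv=5)
    print(f"{name}: R^2 = {r2_score(visual_score, pred):.2f}")
```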