Latest publications: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings)
Deep Learning in Medical Image Analysis: Challenges and Applications
Wim E. Crusio, J. Lambris, Gobert N. Lee, H. Fujita
DOI: https://doi.org/10.1007/978-3-030-33128-3. Published 2020.
Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings
D. Stoyanov, Z. Taylor, G. Carneiro, T. Syeda-Mahmood, Anne L. Martel, L. Maier-Hein, J. Tavares, A. Bradley, J. Papa, Vasileios Belagiannis, J. Nascimento, Zhi Lu, Sailesh Conjeti, M. Moradi, H. Greenspan, A. Madabhushi
DOI: https://doi.org/10.1007/978-3-030-00889-5. Published September 20, 2018.
{"title":"Contextual Additive Networks to Efficiently Boost 3D Image Segmentations","authors":"Zhenlin Xu, Zhengyang Shen, M. Niethammer","doi":"10.1007/978-3-030-00889-5_11","DOIUrl":"https://doi.org/10.1007/978-3-030-00889-5_11","url":null,"abstract":"","PeriodicalId":92501,"journal":{"name":"Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S...","volume":"307 1","pages":"92-100"},"PeriodicalIF":0.0,"publicationDate":"2018-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91308586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-Automated Extraction of Crohns Disease MR Imaging Markers using a 3D Residual CNN with Distance Prior.","authors":"Yechiel Lamash, Sila Kurugol, Simon K Warfield","doi":"10.1007/978-3-030-00889-5_25","DOIUrl":"10.1007/978-3-030-00889-5_25","url":null,"abstract":"<p><p>We propose a 3D residual convolutional neural network (CNN) algorithm with an integrated distance prior for segmenting the small bowel lumen and wall to enable extraction of pediatric Crohns disease (pCD) imaging markers from T1-weighted contrast-enhanced MR images. Our proposed segmentation framework enables, for the first time, to quantitatively assess luminal narrowing and dilation in CD aimed at optimizing surgical decisions as well as analyzing bowel wall thickness and tissue enhancement for assessment of response to therapy. Given seed points along the bowel lumen, the proposed algorithm automatically extracts 3D image patches centered on these points and a distance map from the interpolated centerline. These 3D patches and corresponding distance map are jointly used by the proposed residual CNN architecture to segment the lumen and the wall, and to extract imaging markers. Due to lack of available training data, we also propose a novel and efficient semi-automated segmentation algorithm based on graph-cuts technique as well as a software tool for quickly editing labeled data that was used to train our proposed CNN model. The method which is based on curved planar reformation of the small bowel is also useful for visualizing, manually refining, and measuring pCD imaging markers. In preliminary experiments, our CNN network obtained Dice coefficients of 75 ± 18%, 81 ± 8% and 97 ± 2% for the lumen, wall and background, respectively.</p>","PeriodicalId":92501,"journal":{"name":"Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S...","volume":"11045 ","pages":"218-226"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235454/pdf/nihms-995214.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36743553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unpaired Deep Cross-Modality Synthesis with Fast Training
Lei Xiang, Yang Li, Weili Lin, Qian Wang, Dinggang Shen
DOI: https://doi.org/10.1007/978-3-030-00889-5_18. Vol. 11045, pp. 155-164, September 2018.
Abstract: Cross-modality synthesis converts an input image of one modality into an output image of another modality, which is valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, yet it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment (e.g., due to patient or organ motion) between cross-modality image pairs can adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis trained on unpaired data. Specifically, we adopt generative adversarial networks and perform fast training in a cyclic fashion. A new structural dissimilarity loss, which captures detailed anatomy, is introduced to enhance the quality of the synthesized images. We validate the proposed algorithm on three popular image synthesis tasks: brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that the proposed method achieves good synthesis performance using unpaired data only.
{"title":"Contextual Additive Networks to Efficiently Boost 3D Image Segmentations.","authors":"Zhenlin Xu, Zhengyang Shen, Marc Niethammer","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Semantic segmentation for 3D medical images is an important task for medical image analysis which would benefit from more efficient approaches. We propose a 3D segmentation framework of cascaded fully convolutional networks (FCNs) with contextual inputs and additive outputs. Compared to previous contextual cascaded networks the additive output forces each subsequent model to refine the output of previous models in the cascade. We use U-Nets of various complexity as elementary FCNs and demonstrate our method for cartilage segmentation on a large set of 3D magnetic resonance images (MRI) of the knee. We show that a cascade of simple U-Nets may for certain tasks be superior to a single deep and complex U-Net with almost two orders of magnitude more parameters. Our framework also allows greater flexibility in trading-off performance and efficiency during testing and training.</p>","PeriodicalId":92501,"journal":{"name":"Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S...","volume":"11045 ","pages":"92-100"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6590074/pdf/nihms-1033318.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41223266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
DOI: https://doi.org/10.1007/978-3-030-00889-5_1. Vol. 11045, pp. 3-11, September 2018. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7329239/pdf/nihms-1600717.pdf
Abstract: In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. The architecture is essentially a deeply supervised encoder-decoder network in which the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim to reduce the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We evaluated UNet++ against U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves average IoU gains of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
Active Deep Learning with Fisher Information for Patch-wise Semantic Segmentation
Jamshid Sourati, Ali Gholipour, Jennifer G. Dy, Sila Kurugol, Simon K. Warfield
DOI: https://doi.org/10.1007/978-3-030-00889-5_10. Vol. 11045, pp. 83-91, September 2018.
Abstract: Deep learning with convolutional neural networks (CNNs) has achieved unprecedented success in segmentation; however, it requires large amounts of training data, which are expensive to obtain. Active learning (AL) frameworks can yield major improvements in CNN performance by intelligently selecting a minimal set of data to be labeled. This paper proposes, for the first time for CNNs, a diversified AL method based on Fisher information (FI), in which gradients from backpropagation are used to compute FI efficiently over the large CNN parameter space. We evaluated the proposed method on newborn and adolescent brain extraction under two scenarios: (1) semi-automatic segmentation of a particular subject from a different age group, or with a pathology not present in the original training data, where, starting from an inaccurate pre-trained model, we iteratively label a small number of voxels queried by AL until the model generates an accurate segmentation for that subject; and (2) using AL to build a universal model that generalizes to all images in a given dataset. In both scenarios, FI-based AL improved performance after labeling a small percentage (less than 0.05%) of voxels. The results showed that FI-based AL significantly outperformed random sampling and achieved higher accuracy than entropy-based querying in transfer learning, where the model learns to extract the brains of newborn subjects given an initial model trained on adolescents.
Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease
Danielle F. Pace, Adrian V. Dalca, Tom Brosch, Tal Geva, Andrew J. Powell, Jürgen Weese, Mehdi H. Moghari, Polina Golland
DOI: https://doi.org/10.1007/978-3-030-00889-5_38. Pp. 334-342, September 2018.
Abstract: We propose a new iterative segmentation model that can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, which requires a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation over several steps, and we implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate the challenges of segmenting heart structures from cardiac MRI in patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.