Latest publications from Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain

Deep Learning in Medical Image Analysis: Challenges and Applications
Authors: Wim E. Crusio, J. Lambris, Gobert N. Lee, H. Fujita
DOI: 10.1007/978-3-030-33128-3 | Published: 2020-01-01
Citations: 60

Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings
Editors: D. Stoyanov, Z. Taylor, G. Carneiro, T. Syeda-Mahmood, Anne L. Martel, L. Maier-Hein, J. Tavares, A. Bradley, J. Papa, Vasileios Belagiannis, J. Nascimento, Zhi Lu, Sailesh Conjeti, M. Moradi, H. Greenspan, A. Madabhushi
DOI: 10.1007/978-3-030-00889-5 | Published: 2018-09-20
Citations: 64

Contextual Additive Networks to Efficiently Boost 3D Image Segmentations
Authors: Zhenlin Xu, Zhengyang Shen, M. Niethammer
DOI: 10.1007/978-3-030-00889-5_11 | pp. 92-100 | Published: 2018-09-20
Citations: 7

Semi-Automated Extraction of Crohn's Disease MR Imaging Markers Using a 3D Residual CNN with Distance Prior
Authors: Yechiel Lamash, Sila Kurugol, Simon K Warfield
DOI: 10.1007/978-3-030-00889-5_25 | Volume 11045, pp. 218-226 | Published: 2018-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235454/pdf/nihms-995214.pdf
Abstract: We propose a 3D residual convolutional neural network (CNN) algorithm with an integrated distance prior for segmenting the small bowel lumen and wall, to enable extraction of pediatric Crohn's disease (pCD) imaging markers from T1-weighted contrast-enhanced MR images. Our segmentation framework enables, for the first time, quantitative assessment of luminal narrowing and dilation in CD aimed at optimizing surgical decisions, as well as analysis of bowel wall thickness and tissue enhancement for assessment of response to therapy. Given seed points along the bowel lumen, the proposed algorithm automatically extracts 3D image patches centered on these points and a distance map from the interpolated centerline. These 3D patches and the corresponding distance map are jointly used by the proposed residual CNN architecture to segment the lumen and the wall, and to extract imaging markers. Due to the lack of available training data, we also propose a novel and efficient semi-automated segmentation algorithm based on the graph-cuts technique, as well as a software tool for quickly editing labeled data, which was used to train our CNN model. The method, which is based on curved planar reformation of the small bowel, is also useful for visualizing, manually refining, and measuring pCD imaging markers. In preliminary experiments, our CNN obtained Dice coefficients of 75 ± 18%, 81 ± 8%, and 97 ± 2% for the lumen, wall, and background, respectively.
Citations: 0

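The abstract above describes feeding 3D image patches together with a centerline-distance map into a residual CNN. Below is a minimal PyTorch-style sketch of that idea, written under my own assumptions (layer sizes, depth, and the names `ResBlock3d` and `DistancePriorSegNet` are illustrative, not the authors' code): the distance prior simply enters as a second input channel alongside the image patch.

```python
# Minimal sketch, not the authors' implementation: a tiny 3D residual CNN that
# takes an image patch plus a centerline-distance map as a second input channel
# and predicts lumen / wall / background scores per voxel.
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection around two convolutions.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class DistancePriorSegNet(nn.Module):
    def __init__(self, n_classes=3, ch=16):
        super().__init__()
        # 2 input channels: the T1-weighted patch and the distance-to-centerline map.
        self.stem = nn.Conv3d(2, ch, kernel_size=3, padding=1)
        self.body = nn.Sequential(ResBlock3d(ch), ResBlock3d(ch))
        self.head = nn.Conv3d(ch, n_classes, kernel_size=1)

    def forward(self, patch, distance_map):
        x = torch.cat([patch, distance_map], dim=1)  # join image and prior
        return self.head(self.body(self.stem(x)))

# Example: one 32^3 patch with its distance map -> per-voxel class scores.
net = DistancePriorSegNet()
logits = net(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 3, 32, 32, 32])
```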
Unpaired Deep Cross-Modality Synthesis with Fast Training
Authors: Lei Xiang, Yang Li, Weili Lin, Qian Wang, Dinggang Shen
DOI: 10.1007/978-3-030-00889-5_18 | Volume 11045, pp. 155-164 | Published: 2018-09-01
Abstract: Cross-modality synthesis converts an input image of one modality to an output image of another modality, and is thus valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired images for training, while it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment between the cross-modality paired images (e.g., due to patient or organ motion) may adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis trained with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomy, is introduced to enhance the quality of the synthesized images. We validate our proposed algorithm on three popular image synthesis tasks: brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that our method achieves good synthesis performance using unpaired data only.
Citations: 18

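The synthesis paper above mentions a structural dissimilarity loss that preserves detailed anatomy. As an illustration of the general idea only (the paper's exact formulation may differ), here is a common single-scale SSIM-based dissimilarity term, DSSIM = (1 - SSIM) / 2, computed with uniform local windows.

```python
# Illustrative sketch only: an SSIM-based structural dissimilarity term of the
# kind the abstract refers to; constants and windowing are common defaults,
# not the authors' exact choices.
import torch
import torch.nn.functional as F

def dssim(x, y, window=7, c1=0.01**2, c2=0.03**2):
    """Structural dissimilarity between two image batches with values in [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return ((1 - ssim) / 2).mean()

# In cycle-consistent training, such a term could penalize structural changes
# between an input image and its reconstruction after a modality round trip.
loss = dssim(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```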
Contextual Additive Networks to Efficiently Boost 3D Image Segmentations
Authors: Zhenlin Xu, Zhengyang Shen, Marc Niethammer
Volume 11045, pp. 92-100 | Published: 2018-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6590074/pdf/nihms-1033318.pdf
Abstract: Semantic segmentation of 3D medical images is an important task in medical image analysis that would benefit from more efficient approaches. We propose a 3D segmentation framework of cascaded fully convolutional networks (FCNs) with contextual inputs and additive outputs. Compared to previous contextual cascaded networks, the additive output forces each subsequent model to refine the output of the previous models in the cascade. We use U-Nets of varying complexity as elementary FCNs and demonstrate our method on cartilage segmentation in a large set of 3D magnetic resonance images (MRI) of the knee. We show that a cascade of simple U-Nets can, for certain tasks, be superior to a single deep and complex U-Net with almost two orders of magnitude more parameters. Our framework also allows greater flexibility in trading off performance and efficiency during testing and training.
Citations: 0

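The cascaded-FCN abstract above can be made concrete with a small sketch. The code below is an illustration under my own assumptions (tiny stand-in networks instead of U-Nets, and the hypothetical names `tiny_fcn` and `AdditiveCascade`), showing the two ingredients named in the abstract: each stage receives contextual input (the image plus the running prediction) and produces an additive correction to the logits.

```python
# Minimal sketch, not the authors' implementation: a cascade of small
# segmentation networks with contextual inputs and additive outputs.
import torch
import torch.nn as nn

def tiny_fcn(in_ch, out_ch):
    # Stand-in for a U-Net stage; any FCN with matching channels would do.
    return nn.Sequential(
        nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(8, out_ch, 1),
    )

class AdditiveCascade(nn.Module):
    def __init__(self, n_stages=3, n_classes=2):
        super().__init__()
        self.n_classes = n_classes
        # Each stage sees the image plus the current logits as context.
        self.stages = nn.ModuleList(
            [tiny_fcn(1 + n_classes, n_classes) for _ in range(n_stages)]
        )

    def forward(self, image):
        logits = torch.zeros(
            image.shape[0], self.n_classes, *image.shape[2:], device=image.device
        )
        for stage in self.stages:
            # Additive output: each stage refines the previous estimate.
            logits = logits + stage(torch.cat([image, logits], dim=1))
        return logits

pred = AdditiveCascade()(torch.randn(1, 1, 16, 32, 32))
```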
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
Authors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
DOI: 10.1007/978-3-030-00889-5_1 | Volume 11045, pp. 3-11 | Published: 2018-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7329239/pdf/nihms-1600717.pdf
Abstract: In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The redesigned skip pathways aim to reduce the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We evaluated UNet++ against the U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
Citations: 0

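To make the "nested, dense skip pathways" of UNet++ concrete, the sketch below shows how a single node in such a grid can be computed: it convolves the concatenation of all earlier nodes at the same depth with the upsampled node from one level deeper. This is an illustrative reading of the architecture, not the reference implementation; channel counts and the helper `nested_node` are my own assumptions.

```python
# Sketch of one nested skip-pathway node:
#   X[i][j] = conv(cat(X[i][0..j-1], upsample(X[i+1][j-1])))
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

def nested_node(block, same_depth_nodes, deeper_node):
    """Compute one UNet++-style node from same-depth nodes and the deeper node."""
    up = F.interpolate(deeper_node, scale_factor=2, mode="bilinear",
                       align_corners=False)
    return block(torch.cat(same_depth_nodes + [up], dim=1))

# Example: two 32-channel nodes at depth i and a 64-channel node at depth i+1.
x_i0, x_i1 = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
x_deeper = torch.randn(1, 64, 32, 32)
x_i2 = nested_node(ConvBlock(32 + 32 + 64, 32), [x_i0, x_i1], x_deeper)
```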
Active Deep Learning with Fisher Information for Patch-wise Semantic Segmentation
Authors: Jamshid Sourati, Ali Gholipour, Jennifer G Dy, Sila Kurugol, Simon K Warfield
DOI: 10.1007/978-3-030-00889-5_10 | Volume 11045, pp. 83-91 | Published: 2018-09-01
Abstract: Deep learning with convolutional neural networks (CNNs) has achieved unprecedented success in segmentation, but it requires large amounts of training data, which are expensive to obtain. Active learning (AL) frameworks can yield major improvements in CNN performance by intelligently selecting a minimal amount of data to be labeled. This paper proposes a novel diversified AL method based on Fisher information (FI), for the first time for CNNs, where gradient computations from backpropagation are used for efficient computation of FI over the large CNN parameter space. We evaluated the proposed method in the context of newborn and adolescent brain extraction under two scenarios: (1) semi-automatic segmentation of a particular subject from a different age group or with a pathology not present in the original training data, where, starting from an inaccurate pre-trained model, we iteratively label a small number of voxels queried by AL until the model generates an accurate segmentation for that subject; and (2) using AL to build a universal model generalizable to all images in a given dataset. In both scenarios, FI-based AL improved performance after labeling a small percentage (less than 0.05%) of voxels. The results showed that FI-based AL significantly outperformed random sampling and achieved higher accuracy than entropy-based querying in transfer learning, where the model learns to extract the brains of newborn subjects given an initial model trained on adolescents.
Citations: 29

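The active-learning abstract above relies on Fisher information computed from backpropagated gradients. The snippet below is a heavily simplified illustration of that ingredient only: it scores each unlabeled sample by an estimate of the trace of the diagonal Fisher information and queries the top-scoring samples. It does not reproduce the paper's diversified selection strategy, and the helper names are hypothetical.

```python
# Simplified sketch: Fisher-information scoring for active-learning queries.
import torch
import torch.nn.functional as F

def fisher_trace(model, x):
    """Trace of a diagonal Fisher-information estimate for one unlabeled sample x."""
    probs = F.softmax(model(x), dim=1).detach().squeeze(0)
    trace = 0.0
    for c in range(probs.numel()):
        model.zero_grad()
        # Gradient of the log-likelihood of class c w.r.t. all parameters.
        log_p_c = F.log_softmax(model(x), dim=1)[0, c]
        log_p_c.backward()
        grad_sq = sum((p.grad ** 2).sum().item()
                      for p in model.parameters() if p.grad is not None)
        # Expectation over the model's own predictive distribution.
        trace += probs[c].item() * grad_sq
    return trace

def select_queries(model, unlabeled, k):
    """Return indices of the k samples with the largest FI trace."""
    scores = [fisher_trace(model, x) for x in unlabeled]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Example with a toy classifier over 10-dimensional feature vectors.
model = torch.nn.Linear(10, 3)
pool = [torch.randn(1, 10) for _ in range(20)]
print(select_queries(model, pool, k=5))
```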
Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease
Authors: Danielle F Pace, Adrian V Dalca, Tom Brosch, Tal Geva, Andrew J Powell, Jürgen Weese, Mehdi H Moghari, Polina Golland
DOI: 10.1007/978-3-030-00889-5_38 | pp. 334-342 | Published: 2018-09-01
Abstract: We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation over several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentations for patients with the most severe CHD malformations.
Citations: 20

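The abstract above describes recursively evolving a segmentation over several steps and supervising the intermediate steps. A minimal sketch of that loop, under my own assumptions (a tiny update network rather than the authors' recurrent architecture, and the hypothetical names `StepNet` and `iterate_segmentation`), is given below.

```python
# Minimal sketch of iterative segmentation refinement: one small update network
# is applied repeatedly, each step consuming the image and the current
# segmentation estimate and emitting the next one; training can supervise every
# intermediate step as well as the final output.
import torch
import torch.nn as nn

class StepNet(nn.Module):
    def __init__(self, n_classes=2, ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + n_classes, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, n_classes, 1),
        )

    def forward(self, image, seg_logits):
        return self.net(torch.cat([image, seg_logits], dim=1))

def iterate_segmentation(step_net, image, n_steps=4, n_classes=2):
    seg = torch.zeros(image.shape[0], n_classes, *image.shape[2:])
    intermediates = []
    for _ in range(n_steps):
        seg = step_net(image, seg)   # evolve the segmentation by one step
        intermediates.append(seg)
    return intermediates             # supervise every step during training

steps = iterate_segmentation(StepNet(), torch.randn(1, 1, 16, 16, 16))
```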