A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer
{"title":"使用辅助标签的有限注释的深度网络解剖分割","authors":"A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer","doi":"10.1109/ISBI.2019.8759488","DOIUrl":null,"url":null,"abstract":"Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation that are close to the state of the art atlas-based segmentation method. On one hand CNNs have 20x faster predictions than atlas-based segmentation. However, one of the main holdbacks of CNN’s advancement is that it’s training requires large amount of annotated data. This is a costly hurdle as annotation is time consuming and requires expensive medical expertise. The goal of this work is to reach state of the art segmentation performance using the minimum amount of expensive manual annotations. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation for producing more accurate auxiliary segmentations and a quality control algorithm that excludes poor quality auxiliary segmentations from CNN training. We perform extensive experiments over chest CT dataset by varying the number of manual annotations used for atlas-based methods and by varying the number of auxiliary segmentations to train the CNN. Our results show that CNN trained with auxiliary segmentations achieve higher dice of 0.76 vs 0.58 when trained with few accurate manual segmentations. Moreover, training with 100 or more auxiliary segmentations, the CNN always outperforms atlas-based method. 
Finally, when carefully selecting single atlas for producing auxiliary segmentations and controlling the quality of auxiliary segmentations, the trained CNN archives high average dice of 0.72 vs 0.62 when using a randomly selected image for manual annotation with all auxiliary segmentations.","PeriodicalId":119935,"journal":{"name":"2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels\",\"authors\":\"A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer\",\"doi\":\"10.1109/ISBI.2019.8759488\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation that are close to the state of the art atlas-based segmentation method. On one hand CNNs have 20x faster predictions than atlas-based segmentation. However, one of the main holdbacks of CNN’s advancement is that it’s training requires large amount of annotated data. This is a costly hurdle as annotation is time consuming and requires expensive medical expertise. The goal of this work is to reach state of the art segmentation performance using the minimum amount of expensive manual annotations. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation for producing more accurate auxiliary segmentations and a quality control algorithm that excludes poor quality auxiliary segmentations from CNN training. 
We perform extensive experiments over chest CT dataset by varying the number of manual annotations used for atlas-based methods and by varying the number of auxiliary segmentations to train the CNN. Our results show that CNN trained with auxiliary segmentations achieve higher dice of 0.76 vs 0.58 when trained with few accurate manual segmentations. Moreover, training with 100 or more auxiliary segmentations, the CNN always outperforms atlas-based method. Finally, when carefully selecting single atlas for producing auxiliary segmentations and controlling the quality of auxiliary segmentations, the trained CNN archives high average dice of 0.72 vs 0.62 when using a randomly selected image for manual annotation with all auxiliary segmentations.\",\"PeriodicalId\":119935,\"journal\":{\"name\":\"2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISBI.2019.8759488\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI.2019.8759488","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels

Abstract
Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation, approaching that of state-of-the-art atlas-based segmentation methods while producing predictions roughly 20x faster. However, one of the main obstacles to CNNs' advancement is that their training requires a large amount of annotated data. This is a costly hurdle, as annotation is time-consuming and requires expensive medical expertise. The goal of this work is to reach state-of-the-art segmentation performance using the minimum amount of expensive manual annotation. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation, producing more accurate auxiliary segmentations, and a quality control algorithm that excludes poor-quality auxiliary segmentations from CNN training. We perform extensive experiments on a chest CT dataset, varying the number of manual annotations used by the atlas-based method and the number of auxiliary segmentations used to train the CNN. Our results show that a CNN trained with auxiliary segmentations achieves a higher Dice score (0.76 vs. 0.58) than one trained with only a few accurate manual segmentations. Moreover, when trained with 100 or more auxiliary segmentations, the CNN always outperforms the atlas-based method. Finally, when a single atlas is carefully selected for producing auxiliary segmentations and the quality of the auxiliary segmentations is controlled, the trained CNN achieves a high average Dice of 0.72, vs. 0.62 when a randomly selected image is used for manual annotation with all auxiliary segmentations.
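The Dice scores quoted above measure voxel overlap between a predicted mask and a reference mask. As a minimal illustration (not the authors' code), the sketch below computes the Dice coefficient with NumPy and shows how a threshold-based quality-control step over auxiliary segmentations might look; the function names and the 0.5 threshold are assumptions for illustration only.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

def filter_auxiliary(masks, reference, threshold=0.5):
    """Hypothetical quality-control step: keep only auxiliary
    segmentations whose Dice against a reference mask exceeds
    a threshold, dropping poor-quality ones from training."""
    return [m for m in masks if dice_score(m, reference) > threshold]
```

A score of 1.0 means perfect overlap and 0.0 means none, so the paper's reported gap of 0.76 vs. 0.58 is a substantial difference in segmentation quality.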