Andrew Janowczyk, Scott Doyle, Hannah Gilmore, Anant Madabhushi
{"title":"应用于数字病理图像核分割的分辨率自适应深度分层(RADHicaL)学习方案。","authors":"Andrew Janowczyk, Scott Doyle, Hannah Gilmore, Anant Madabhushi","doi":"10.1080/21681163.2016.1141063","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: .9407 vs .9854 Detection Rate, .8218 vs .8489 <i>F</i>-score, .8061 vs .8364 true positive rate and .8822 vs 0.8932 positive predictive value. Our performance indices compare favourably with state of the art nuclear segmentation approaches for digital pathology images.</p>","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5935259/pdf/nihms801416.pdf","citationCount":"0","resultStr":"{\"title\":\"A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.\",\"authors\":\"Andrew Janowczyk, Scott Doyle, Hannah Gilmore, Anant Madabhushi\",\"doi\":\"10.1080/21681163.2016.1141063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. 
We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: .9407 vs .9854 Detection Rate, .8218 vs .8489 <i>F</i>-score, .8061 vs .8364 true positive rate and .8822 vs 0.8932 positive predictive value. Our performance indices compare favourably with state of the art nuclear segmentation approaches for digital pathology images.</p>\",\"PeriodicalId\":51800,\"journal\":{\"name\":\"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2018-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5935259/pdf/nihms801416.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/21681163.2016.1141063\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2016/4/28 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21681163.2016.1141063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2016/4/28 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.
Abstract

Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high-performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: 0.9407 vs. 0.9854 detection rate, 0.8218 vs. 0.8489 F-score, 0.8061 vs. 0.8364 true positive rate and 0.8822 vs. 0.8932 positive predictive value. Our performance indices compare favourably with state-of-the-art nuclear segmentation approaches for digital pathology images.
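The abstract describes the adaptive cascade only at a high level: low-magnification networks decide which regions are worth re-examining at higher, more expensive magnifications. The sketch below illustrates that idea in simplified form. It is not the authors' implementation; the function predict_at, the magnification list MAGNIFICATIONS and the uncertainty band UNCERTAIN_LOW/UNCERTAIN_HIGH are hypothetical placeholders, and the per-pixel probability formulation stands in for the patch-based processing used in the paper.

```python
# Minimal sketch of a resolution-adaptive segmentation cascade in the spirit
# of RADHicaL. All names below are illustrative placeholders.
import numpy as np

MAGNIFICATIONS = [5, 10, 20, 40]          # low -> high magnification levels
UNCERTAIN_LOW, UNCERTAIN_HIGH = 0.2, 0.8  # probabilities in this band are "unsure"

def predict_at(image, magnification):
    """Placeholder for a per-magnification DL classifier that returns a
    per-pixel probability map for the class of interest (random here)."""
    rng = np.random.default_rng(magnification)
    return rng.random(image.shape[:2])

def resolution_adaptive_segmentation(image):
    """Start at the lowest magnification; only pixels whose class is still
    uncertain are re-evaluated at the next (more expensive) magnification."""
    probs = predict_at(image, MAGNIFICATIONS[0])
    active = np.ones(probs.shape, dtype=bool)          # pixels still undecided
    for mag in MAGNIFICATIONS[1:]:
        active &= (probs > UNCERTAIN_LOW) & (probs < UNCERTAIN_HIGH)
        if not active.any():
            break                                      # everything already decided
        finer = predict_at(image, mag)                 # in practice: only cropped patches
        probs[active] = finer[active]                  # refine only the uncertain pixels
    return probs >= 0.5                                # final binary nuclear mask

if __name__ == "__main__":
    dummy_slide_region = np.zeros((256, 256, 3), dtype=np.uint8)
    mask = resolution_adaptive_segmentation(dummy_slide_region)
    print("segmented pixels:", int(mask.sum()))
```

Because most pixels can be confidently classified (or rejected) at low magnification, only a small fraction of the slide is ever passed to the expensive high-magnification network, which is where the reported reduction in compute time comes from.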
About the journal:
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization is an international journal whose main goals are to promote solutions of excellence for both imaging and visualization of biomedical data, and establish links among researchers, clinicians, the medical technology sector and end-users. The journal provides a comprehensive forum for discussion of the current state-of-the-art in the scientific fields related to imaging and visualization, including, but not limited to: Applications of Imaging and Visualization; Computational Bio-imaging and Visualization; Computer Aided Diagnosis, Surgery, Therapy and Treatment; Data Processing and Analysis; Devices for Imaging and Visualization; Grid and High Performance Computing for Imaging and Visualization; Human Perception in Imaging and Visualization; Image Processing and Analysis; Image-based Geometric Modelling; Imaging and Visualization in Biomechanics; Imaging and Visualization in Biomedical Engineering; Medical Clinics; Medical Imaging and Visualization; Multi-modal Imaging and Visualization; Multiscale Imaging and Visualization; Scientific Visualization; Software Development for Imaging and Visualization; Telemedicine Systems and Applications; Virtual Reality; Visual Data Mining and Knowledge Discovery.