High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms.

George Teodoro, Tony Pan, Tahsin M Kurc, Jun Kong, Lee A D Cooper, Norbert Podhorszki, Scott Klasky, Joel H Saltz
{"title":"High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms.","authors":"George Teodoro, Tony Pan, Tahsin M Kurc, Jun Kong, Lee A D Cooper, Norbert Podhorszki, Scott Klasky, Joel H Saltz","doi":"10.1109/IPDPS.2013.11","DOIUrl":null,"url":null,"abstract":"<p><p>Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system.</p>","PeriodicalId":89233,"journal":{"name":"Proceedings. IPDPS (Conference)","volume":"2013 ","pages":"103-114"},"PeriodicalIF":0.0000,"publicationDate":"2013-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4240318/pdf/nihms-608079.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IPDPS (Conference)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS.2013.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high-throughput computation of large datasets of high-resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance-aware scheduling techniques along with several optimizations, including architecture-aware process placement, data-locality-conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements over GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than the coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4K×4K-pixel image tiles (about 1.8 TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system.
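To make the scheduling idea concrete, the following is a minimal, self-contained C++ sketch (not the paper's actual runtime or API) of a speedup-aware work queue in the spirit of the performance-aware scheduling described above: GPU workers pull the pending fine-grain operation with the highest estimated GPU speedup, while CPU workers pull the one expected to benefit least from the GPU. The operation names, speedup estimates, and class names are illustrative assumptions.

// Illustrative sketch only: a speedup-aware work queue for assigning
// fine-grain operations to CPU or GPU workers. Operation names and
// speedup values are hypothetical, not taken from the paper.
#include <iostream>
#include <iterator>
#include <map>
#include <mutex>
#include <optional>
#include <string>
#include <thread>

struct Operation {
    std::string name;        // e.g. a segmentation or feature-computation step
    double est_gpu_speedup;  // predicted GPU-over-CPU speedup for this operation
};

class SpeedupAwareQueue {
public:
    void push(const Operation& op) {
        std::lock_guard<std::mutex> lk(mtx_);
        ops_.emplace(op.est_gpu_speedup, op);
    }
    // GPU workers take the highest-speedup operation; CPU workers take the lowest.
    std::optional<Operation> pop(bool for_gpu) {
        std::lock_guard<std::mutex> lk(mtx_);
        if (ops_.empty()) return std::nullopt;
        auto it = for_gpu ? std::prev(ops_.end()) : ops_.begin();
        Operation op = it->second;
        ops_.erase(it);
        return op;
    }
private:
    std::mutex mtx_;
    std::multimap<double, Operation> ops_;  // ordered by estimated speedup
};

int main() {
    SpeedupAwareQueue q;
    q.push({"color normalization", 2.0});
    q.push({"morphological reconstruction", 14.0});
    q.push({"watershed segmentation", 5.0});
    q.push({"feature computation", 8.0});

    // One GPU worker and one CPU worker drain the queue cooperatively.
    auto worker = [&q](bool on_gpu) {
        while (auto op = q.pop(on_gpu))
            std::cout << op->name << " -> " << (on_gpu ? "GPU" : "CPU") << "\n";
    };
    std::thread gpu(worker, true), cpu(worker, false);
    gpu.join();
    cpu.join();
    return 0;
}

In this scheme the relative ordering of operations by estimated speedup, rather than the absolute values, decides which processor type each operation is assigned to, so CPU cores absorb the operations that would gain little from running on the GPU.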
