The Challenge of Scaling Genome Big Data Analysis Software on TH-2 Supercomputer

Shaoliang Peng, Xiangke Liao, Canqun Yang, Yutong Lu, Jie Liu, Yingbo Cui, Heng Wang, Chengkun Wu, Bingqiang Wang
{"title":"The Challenge of Scaling Genome Big Data Analysis Software on TH-2 Supercomputer","authors":"Shaoliang Peng, Xiangke Liao, Canqun Yang, Yutong Lu, Jie Liu, Yingbo Cui, Heng Wang, Chengkun Wu, Bingqiang Wang","doi":"10.1109/CCGrid.2015.46","DOIUrl":null,"url":null,"abstract":"Whole genome re-sequencing plays a crucial role in biomedical studies. The emergence of genomic big data calls for an enormous amount of computing power. However, current computational methods are inefficient in utilizing available computational resources. In this paper, we address this challenge by optimizing the utilization of the fastest supercomputer in the world - TH-2 supercomputer. TH-2 is featured by its neo-heterogeneous architecture, in which each compute node is equipped with 2 Intel Xeon CPUs and 3 Intel Xeon Phi coprocessors. The heterogeneity and the massive amount of data to be processed pose great challenges for the deployment of the genome analysis software pipeline on TH-2. Runtime profiling shows that SOAP3-dp and SOAPsnp are the most time-consuming components (up to 70% of total runtime) in a typical genome-analyzing pipeline. To optimize the whole pipeline, we first devise a number of parallel and optimization strategies for SOAP3-dp and SOAPsnp, respectively targeting each node to fully utilize all sorts of hardware resources provided both by CPU and MIC. We also employ a few scaling methods to reduce communication between different nodes. We then scaled up our method on TH-2. With 8192 nodes, the whole analyzing procedure took 8.37 hours to finish the analysis of a 300 TB dataset of whole genome sequences from 2,000 human beings, which can take as long as 8 months on a commodity server. The speedup is about 700x.","PeriodicalId":6664,"journal":{"name":"2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing","volume":"8 1","pages":"823-828"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGrid.2015.46","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Whole-genome re-sequencing plays a crucial role in biomedical studies, and the emergence of genomic big data calls for an enormous amount of computing power. However, current computational methods make inefficient use of the available resources. In this paper, we address this challenge by optimizing utilization of the fastest supercomputer in the world, the TH-2 supercomputer. TH-2 features a neo-heterogeneous architecture in which each compute node is equipped with 2 Intel Xeon CPUs and 3 Intel Xeon Phi (MIC) coprocessors. This heterogeneity, together with the massive amount of data to be processed, poses great challenges for deploying a genome analysis software pipeline on TH-2. Runtime profiling shows that SOAP3-dp and SOAPsnp are the most time-consuming components of a typical genome analysis pipeline, accounting for up to 70% of total runtime. To optimize the whole pipeline, we first devise parallelization and optimization strategies for SOAP3-dp and SOAPsnp within each node, so as to fully utilize the hardware resources provided by both the CPUs and the MIC coprocessors. We also employ several scaling methods to reduce communication between nodes. We then scaled our method up on TH-2: with 8,192 nodes, the whole analysis of a 300 TB dataset of whole-genome sequences from 2,000 humans took 8.37 hours, whereas it can take as long as 8 months on a commodity server. The speedup is about 700x.
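The abstract does not spell out how work is split across a node's 2 Xeon CPUs and 3 Xeon Phi cards, but a minimal sketch of the usual pattern on TH-2-class nodes looks like the following. It assumes Intel's Language Extensions for Offload (LEO) pragmas for the Xeon Phi; the function `align_chunk`, the even four-way split, and the one-host-thread-per-device layout are illustrative assumptions, not the authors' actual SOAP3-dp/SOAPsnp code.

```cpp
// Hypothetical sketch: splitting one node's read batch between the host Xeon
// CPUs and the three Xeon Phi (MIC) coprocessors using Intel's offload pragmas.
// All names and the even split are illustrative assumptions.
#include <omp.h>
#include <cstddef>

// Stand-in for the per-read alignment kernel; marked so the Intel compiler
// builds it for both the host and the MIC cards.
__attribute__((target(mic)))
void align_chunk(const char *reads, size_t n_reads, int *results) {
    #pragma omp parallel for
    for (size_t i = 0; i < n_reads; ++i) {
        results[i] = static_cast<int>(i % 4);   // placeholder for real alignment work
    }
}

void align_on_node(const char *reads, size_t n_reads, int *results) {
    const int n_mic = 3;                 // 3 Xeon Phi cards per TH-2 node
    const int n_workers = n_mic + 1;     // plus one "worker" for the host CPUs
    size_t chunk = n_reads / n_workers;  // naive even split; real ratios would be tuned

    // One host thread per worker: thread 0 keeps its chunk on the CPUs
    // (nested parallelism must be enabled for that share to fan out),
    // threads 1..3 each push their chunk to one MIC card.
    #pragma omp parallel num_threads(n_workers)
    {
        int w = omp_get_thread_num();
        const char *r = reads + w * chunk;
        int *res = results + w * chunk;
        size_t n = (w == n_workers - 1) ? n_reads - w * chunk : chunk;

        if (w == 0) {
            align_chunk(r, n, res);      // host Xeon CPUs
        } else {
            int card = w - 1;
            #pragma offload target(mic : card) in(r : length(n)) out(res : length(n))
            align_chunk(r, n, res);      // Xeon Phi coprocessor `card`
        }
    }
}
```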
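For the inter-node scaling, the abstract only states that communication between nodes is reduced. One natural decomposition for 2,000 independent genomes is sample-level parallelism, where each MPI rank (one per node) processes its own subset of samples and exchanges no data during alignment or genotyping. The sketch below assumes that decomposition; `process_sample` and the cyclic assignment are hypothetical placeholders, not the paper's implementation.

```cpp
// Hypothetical sketch of a communication-light, sample-level decomposition:
// each MPI rank handles its own samples end to end.
#include <mpi.h>
#include <cstdio>

void process_sample(int sample_id) {
    // Placeholder: in the real pipeline this would run alignment (SOAP3-dp)
    // followed by genotyping (SOAPsnp) for the reads of this sample.
    (void)sample_id;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_samples = 2000;          // 2,000 whole genomes in the reported run

    // Cyclic assignment: sample s goes to rank s % size; per-sample work is independent.
    for (int s = rank; s < n_samples; s += size) {
        process_sample(s);
    }

    // Single synchronization point once all samples are done.
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("all %d samples processed on %d nodes\n", n_samples, size);

    MPI_Finalize();
    return 0;
}
```

On the reported numbers the headline speedup is self-consistent: 8 months is roughly 8 x 730 ≈ 5,800 hours on a single commodity server, and 5,800 / 8.37 ≈ 700, matching the quoted ~700x.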