Understanding Scalability and Performance Requirements of I/O-Intensive Applications on Future Multicore Servers

Shoaib Akram, M. Marazakis, A. Bilas
DOI: 10.1109/MASCOTS.2012.29
Published in: 2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems
Publication date: 2012-08-07
Citations: 6

Abstract

Today, there is increased interest in understanding the impact of data-centric applications on compute and storage infrastructures, as datasets are projected to grow dramatically. In this paper, we examine the storage I/O behavior of twelve data-centric applications as the number of cores per server grows. We configure these applications with realistic datasets and examine configuration points where they perform a significant amount of I/O. We propose using cycles per I/O (cpio) as a metric that abstracts many I/O subsystem configuration details. We analyze specific architectural issues pertaining to data-centric applications, including the usefulness of hyper-threading, sensitivity to memory bandwidth, and the potential impact of disruptive storage technologies. Our results show that today's data-centric applications are not able to scale with the number of cores: moving from one to eight cores results in 0% to 400% more cycles per I/O operation. These applications can achieve much of their performance with only 50% of the memory bandwidth available on modern processors. Hyper-threading is extremely effective for these applications; on average, applications suffer only a 15% reduction in performance when hyper-threading is used instead of full cores. Further, DRAM-type persistent memory has the potential to solve scalability bottlenecks by reducing or eliminating idle and I/O completion periods and improving server utilization. We use a detailed methodology to project that in the year 2020, at 4096 processors, servers will require between 250 and 500 GB/s under optimistic scaling assumptions. We show that if the current trend in application scalability is not reversed, we will need about 2.5 million servers, consuming 10 billion kWh of energy, to do a single pass over the projected 35 zettabytes of data in 2020.
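The cpio metric described in the abstract is, in essence, total CPU cycles consumed divided by the number of I/O operations performed. The sketch below illustrates the idea with hypothetical cycle and I/O counts (the function name and all numbers are illustrative assumptions, not the authors' measurement code); note how a cpio increase between a one-core and an eight-core run falls within the 0%–400% range the paper reports.

```python
def cycles_per_io(cpu_cycles: float, io_ops: int) -> float:
    """cpio: total CPU cycles divided by the number of I/O operations.

    A lower value means less compute spent per unit of storage I/O,
    independent of most I/O subsystem configuration details.
    """
    if io_ops <= 0:
        raise ValueError("no I/O operations recorded")
    return cpu_cycles / io_ops

# Hypothetical runs of the same workload issuing 1M I/O operations:
cpio_1core = cycles_per_io(2.0e12, 1_000_000)  # 2.0e6 cycles per I/O
cpio_8core = cycles_per_io(6.0e12, 1_000_000)  # 6.0e6 cycles per I/O

# Relative increase when scaling from one to eight cores:
increase_pct = (cpio_8core - cpio_1core) / cpio_1core * 100  # 200%
```

In practice the cycle count would come from a hardware counter (e.g. via `perf`) and the I/O count from block-layer statistics; the point of the metric is that the ratio stays comparable across different storage configurations.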