{"title":"重新考虑即将推出的ExaScale系统的协同调度","authors":"Stefan Lankes","doi":"10.1109/HPCSim.2015.7237117","DOIUrl":null,"url":null,"abstract":"Future generation supercomputers will be a hundred times faster than today's leaders of the Top 500 while reaching the exascale mark. It is predicted that this performance gain in terms of CPU power will be achieved by a shift in the ratio of compute nodes to cores per node. The amount of nodes will not grow significantly compared to today's systems, instead they will be built by using many-core CPUs holding more than hundreds of cores resulting in a widening gap between compute power and I/O performance [1]. Four key challenges of future exascale systems have been identified by previous studies that must be coped with when designing them: energy and power, memory and storage, concurrency and locality, and resiliency [2].","PeriodicalId":134009,"journal":{"name":"2015 International Conference on High Performance Computing & Simulation (HPCS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Revisiting co-scheduling for upcoming ExaScale systems\",\"authors\":\"Stefan Lankes\",\"doi\":\"10.1109/HPCSim.2015.7237117\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Future generation supercomputers will be a hundred times faster than today's leaders of the Top 500 while reaching the exascale mark. It is predicted that this performance gain in terms of CPU power will be achieved by a shift in the ratio of compute nodes to cores per node. The amount of nodes will not grow significantly compared to today's systems, instead they will be built by using many-core CPUs holding more than hundreds of cores resulting in a widening gap between compute power and I/O performance [1]. Four key challenges of future exascale systems have been identified by previous studies that must be coped with when designing them: energy and power, memory and storage, concurrency and locality, and resiliency [2].\",\"PeriodicalId\":134009,\"journal\":{\"name\":\"2015 International Conference on High Performance Computing & Simulation (HPCS)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Conference on High Performance Computing & Simulation (HPCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPCSim.2015.7237117\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on High Performance Computing & Simulation (HPCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCSim.2015.7237117","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Revisiting co-scheduling for upcoming ExaScale systems
Next-generation supercomputers will reach the exascale mark, making them roughly a hundred times faster than today's leaders of the Top 500. This performance gain in CPU power is predicted to come from a shift in the ratio of compute nodes to cores per node: the number of nodes will not grow significantly compared to today's systems; instead, nodes will be built from many-core CPUs with hundreds of cores each, widening the gap between compute power and I/O performance [1]. Previous studies have identified four key challenges that the design of future exascale systems must cope with: energy and power, memory and storage, concurrency and locality, and resiliency [2].
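The abstract does not spell out a co-scheduling mechanism, but the underlying idea can be illustrated with a minimal sketch: on a many-core node, two applications with complementary resource profiles can run side by side on disjoint core partitions instead of each occupying the node exclusively. The sketch below uses Linux CPU affinity to enforce such a split; the partition sizes and the application names (compute_bound_app, io_bound_app) are hypothetical, and sched_setaffinity is only one of several ways to pin processes to cores.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Pin the calling process to cores [first, first + count). */
static void pin_to_cores(int first, int count)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = first; c < first + count; c++)
        CPU_SET(c, &set);
    /* pid 0 means "the calling process"; the affinity mask is
       inherited across fork() and preserved across exec(). */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    /* Hypothetical split of a 64-core node: cores 0-31 for a
       compute-bound job, cores 32-63 for an I/O-bound one. */
    pid_t pid = fork();
    if (pid == 0) {
        pin_to_cores(32, 32);  /* child: second partition */
        execlp("./io_bound_app", "./io_bound_app", (char *)NULL);
        perror("execlp");
        _exit(EXIT_FAILURE);
    }
    pin_to_cores(0, 32);       /* parent: first partition */
    execlp("./compute_bound_app", "./compute_bound_app", (char *)NULL);
    perror("execlp");
    return EXIT_FAILURE;
}
```

In practice, batch systems and MPI launchers expose equivalent affinity controls, so a partitioning like this would normally be configured at job-submission time rather than hand-coded per application.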