{"title":"启用下一代可扩展集群","authors":"W. Gropp","doi":"10.1109/CCGRID.2010.135","DOIUrl":null,"url":null,"abstract":"Clusters revolutionized computing by making supercomputer capabilities widely available. But one of the main drivers of that revolution, the rapid doubling of processor clock rates, ran out of steam several years ago. To maintain (or even increase) the historic rate of improvement in computing power, processor designs are rapidly increasing parallelism at all levels, including more functional units, more cores, and ways to share resources among threads. Heterogeneous designs that use more specialized processors such as GPGPUs are becoming common. The scale of high-end systems is also getting larger, with 1000-core systems becoming commonplace and systems with over 300,000 cores planned for 2011. However, the software and algorithms for these systems are still basically the same as when the cluster revolution began. Drawing on experiences with the sustained PetaFLOPS system, called Blue Waters, to be installed at Illinois in 2011, and with exploratory work into Exascale system designs, this talk will discuss some of the challenges facing the cluster community as scalability becomes increasingly important and reviews some of the developments in algorithms, programming models, and software frameworks that must complement the evolution of cluster hardware.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enabling the Next Generation of Scalable Clusters\",\"authors\":\"W. Gropp\",\"doi\":\"10.1109/CCGRID.2010.135\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Clusters revolutionized computing by making supercomputer capabilities widely available. But one of the main drivers of that revolution, the rapid doubling of processor clock rates, ran out of steam several years ago. To maintain (or even increase) the historic rate of improvement in computing power, processor designs are rapidly increasing parallelism at all levels, including more functional units, more cores, and ways to share resources among threads. Heterogeneous designs that use more specialized processors such as GPGPUs are becoming common. The scale of high-end systems is also getting larger, with 1000-core systems becoming commonplace and systems with over 300,000 cores planned for 2011. However, the software and algorithms for these systems are still basically the same as when the cluster revolution began. 
Drawing on experiences with the sustained PetaFLOPS system, called Blue Waters, to be installed at Illinois in 2011, and with exploratory work into Exascale system designs, this talk will discuss some of the challenges facing the cluster community as scalability becomes increasingly important and reviews some of the developments in algorithms, programming models, and software frameworks that must complement the evolution of cluster hardware.\",\"PeriodicalId\":444485,\"journal\":{\"name\":\"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGRID.2010.135\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGRID.2010.135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Clusters revolutionized computing by making supercomputer capabilities widely available. But one of the main drivers of that revolution, the rapid doubling of processor clock rates, ran out of steam several years ago. To maintain (or even increase) the historic rate of improvement in computing power, processor designs are rapidly increasing parallelism at all levels, including more functional units, more cores, and ways to share resources among threads. Heterogeneous designs that use more specialized processors such as GPGPUs are becoming common. The scale of high-end systems is also getting larger, with 1000-core systems becoming commonplace and systems with over 300,000 cores planned for 2011. However, the software and algorithms for these systems are still basically the same as when the cluster revolution began. Drawing on experiences with the sustained PetaFLOPS system, called Blue Waters, to be installed at Illinois in 2011, and with exploratory work into Exascale system designs, this talk will discuss some of the challenges facing the cluster community as scalability becomes increasingly important and review some of the developments in algorithms, programming models, and software frameworks that must complement the evolution of cluster hardware.