T. Boku, M. Sato, A. Ukawa, D. Takahashi, S. Sumimoto, K. Kumon, Takashi Moriyama, M. Shimizu
{"title":"PACS-CS:用于科学计算的大规模带宽感知PC集群","authors":"T. Boku, M. Sato, A. Ukawa, D. Takahashi, S. Sumimoto, K. Kumon, Takashi Moriyama, M. Shimizu","doi":"10.1109/CCGRID.2006.78","DOIUrl":null,"url":null,"abstract":"We have been developing a large scale PC cluster named PACS-CS (Parallel Array Computer System for Computational Sciences) at Center for Computational Sciences, University of Tsukuba, for wide variety of computational science applications such as computational physics, computational material science, computational biology, etc. We consider the most important issue on the computation node is the memory access bandwidth, then a node is equipped with a single CPU which is different from ordinary high-end PC clusters. The interconnection network for parallel processing is configured as a multi-dimensional hyper-crossbar network based on trunking of Gigabit Ethernet to support large scale scientific computation with physical space modeling. Based on the above concept, we are developing an original mother board to configure a single CPU node with 8 ports of Gigabit Ethernet, which can be implemented in the half size of 19 inch rack-mountable 1U size platform. Under the preliminary performance evaluation, we confirmed that the computation part in practical Lattice QCD code will be able to achieve 30% of peak performance, and up to 600 Mbyte/sec of bandwidth at single directed neighboring communication will be achieved. PACS-CS will start its operation on July 2006 with 2560 CPUs and 14.3 Tflops of peak performance.","PeriodicalId":419226,"journal":{"name":"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"PACS-CS: a large-scale bandwidth-aware PC cluster for scientific computation\",\"authors\":\"T. Boku, M. Sato, A. Ukawa, D. Takahashi, S. Sumimoto, K. Kumon, Takashi Moriyama, M. Shimizu\",\"doi\":\"10.1109/CCGRID.2006.78\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We have been developing a large scale PC cluster named PACS-CS (Parallel Array Computer System for Computational Sciences) at Center for Computational Sciences, University of Tsukuba, for wide variety of computational science applications such as computational physics, computational material science, computational biology, etc. We consider the most important issue on the computation node is the memory access bandwidth, then a node is equipped with a single CPU which is different from ordinary high-end PC clusters. The interconnection network for parallel processing is configured as a multi-dimensional hyper-crossbar network based on trunking of Gigabit Ethernet to support large scale scientific computation with physical space modeling. Based on the above concept, we are developing an original mother board to configure a single CPU node with 8 ports of Gigabit Ethernet, which can be implemented in the half size of 19 inch rack-mountable 1U size platform. Under the preliminary performance evaluation, we confirmed that the computation part in practical Lattice QCD code will be able to achieve 30% of peak performance, and up to 600 Mbyte/sec of bandwidth at single directed neighboring communication will be achieved. 
PACS-CS will start its operation on July 2006 with 2560 CPUs and 14.3 Tflops of peak performance.\",\"PeriodicalId\":419226,\"journal\":{\"name\":\"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-05-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGRID.2006.78\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGRID.2006.78","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
PACS-CS: a large-scale bandwidth-aware PC cluster for scientific computation
We have been developing a large-scale PC cluster named PACS-CS (Parallel Array Computer System for Computational Sciences) at the Center for Computational Sciences, University of Tsukuba, for a wide variety of computational science applications such as computational physics, computational materials science, and computational biology. Because we consider memory access bandwidth to be the most important issue for a computation node, each node is equipped with a single CPU, unlike ordinary high-end PC clusters. The interconnection network for parallel processing is configured as a multi-dimensional hyper-crossbar network based on trunking of Gigabit Ethernet links, to support large-scale scientific computation with physical space modeling. Based on this concept, we are developing an original motherboard that configures a single-CPU node with 8 Gigabit Ethernet ports and fits in half the width of a 19-inch rack-mountable 1U platform. In a preliminary performance evaluation, we confirmed that the computation part of a practical Lattice QCD code can achieve 30% of peak performance, and that up to 600 Mbyte/s of bandwidth can be achieved for single-direction neighboring communication. PACS-CS will start operation in July 2006 with 2560 CPUs and a peak performance of 14.3 Tflops.
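A back-of-envelope sketch of the figures quoted in the abstract may help put them in context. The total CPU count, peak performance, QCD efficiency, and neighboring-communication bandwidth are taken from the abstract; the per-node values and the number of trunked Gigabit Ethernet links are derived or assumed here and are not stated in the abstract itself.

# Back-of-envelope check of the figures quoted in the abstract (Python).
# System-level numbers come from the text; per-node values are derived
# from them, and the link count is an illustrative assumption.

TOTAL_CPUS = 2560            # single-CPU nodes
PEAK_TFLOPS = 14.3           # stated system peak performance
QCD_EFFICIENCY = 0.30        # fraction of peak sustained by the Lattice QCD kernel
GBE_BYTES_PER_SEC = 125e6    # theoretical payload of one Gigabit Ethernet link

per_node_peak_gflops = PEAK_TFLOPS * 1e3 / TOTAL_CPUS
qcd_sustained_gflops = per_node_peak_gflops * QCD_EFFICIENCY

# 600 Mbyte/s of single-direction neighboring bandwidth would require
# several GbE links bonded (trunked) in that direction; the exact number
# of links per dimension is an assumption, not taken from the abstract.
links_needed = 600e6 / GBE_BYTES_PER_SEC

print(f"per-node peak           : {per_node_peak_gflops:.2f} Gflops")
print(f"QCD sustained per node  : {qcd_sustained_gflops:.2f} Gflops")
print(f"ideal GbE links for 600 MB/s: {links_needed:.1f}")

Running this gives roughly 5.6 Gflops of peak and 1.7 Gflops of sustained Lattice QCD performance per node, and suggests that about five bonded Gigabit Ethernet links (out of the node's 8 ports) would be needed per direction to reach 600 Mbyte/s under ideal conditions.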