GPU accelerated geometric multigrid method: Performance comparison on recent NVIDIA architectures

I. Stroia, L. Itu, C. Nita, Laszlo Lazar, C. Suciu
{"title":"GPU加速几何多重网格方法:在最新NVIDIA架构上的性能比较","authors":"I. Stroia, L. Itu, C. Nita, Laszlo Lazar, C. Suciu","doi":"10.1109/ICSTCC.2015.7321289","DOIUrl":null,"url":null,"abstract":"During the past decade Graphics Processing Units (GPU) have been increasingly employed for speeding up compute intensive scientific applications. In this field, the geometric multigrid method (GMG) is one of the most efficient algorithms for solving large sparse linear systems of equations. Herein we analyze the performance of an optimized GPU based implementation of the GMG method on different state-of-the-art NVIDIA GPUs. The GTX Titan Black card, set-up with increased double precision performance leads to the smallest execution time. It is marginally faster than the more recently released GTX Titan X card which has considerably lower double precision performance. Moreover, an energy efficiency analysis reveals that the GTX 660M and the more powerful Titan cards require a similar amount of energy for running the GMG algorithm: the larger execution time is compensated by the lower power consumption.","PeriodicalId":257135,"journal":{"name":"2015 19th International Conference on System Theory, Control and Computing (ICSTCC)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"GPU accelerated geometric multigrid method: Performance comparison on recent NVIDIA architectures\",\"authors\":\"I. Stroia, L. Itu, C. Nita, Laszlo Lazar, C. Suciu\",\"doi\":\"10.1109/ICSTCC.2015.7321289\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"During the past decade Graphics Processing Units (GPU) have been increasingly employed for speeding up compute intensive scientific applications. In this field, the geometric multigrid method (GMG) is one of the most efficient algorithms for solving large sparse linear systems of equations. Herein we analyze the performance of an optimized GPU based implementation of the GMG method on different state-of-the-art NVIDIA GPUs. The GTX Titan Black card, set-up with increased double precision performance leads to the smallest execution time. It is marginally faster than the more recently released GTX Titan X card which has considerably lower double precision performance. 
Moreover, an energy efficiency analysis reveals that the GTX 660M and the more powerful Titan cards require a similar amount of energy for running the GMG algorithm: the larger execution time is compensated by the lower power consumption.\",\"PeriodicalId\":257135,\"journal\":{\"name\":\"2015 19th International Conference on System Theory, Control and Computing (ICSTCC)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-11-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 19th International Conference on System Theory, Control and Computing (ICSTCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSTCC.2015.7321289\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 19th International Conference on System Theory, Control and Computing (ICSTCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSTCC.2015.7321289","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: During the past decade, Graphics Processing Units (GPUs) have been increasingly employed to speed up compute-intensive scientific applications. In this field, the geometric multigrid method (GMG) is one of the most efficient algorithms for solving large sparse linear systems of equations. Herein we analyze the performance of an optimized GPU-based implementation of the GMG method on different state-of-the-art NVIDIA GPUs. The GTX Titan Black card, configured for increased double-precision performance, leads to the smallest execution time; it is marginally faster than the more recently released GTX Titan X card, which has considerably lower double-precision performance. Moreover, an energy efficiency analysis reveals that the GTX 660M and the more powerful Titan cards require a similar amount of energy to run the GMG algorithm: the longer execution time is compensated by the lower power consumption.
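The abstract does not include source code, so as a rough illustration of what the GMG algorithm computes, below is a minimal NumPy sketch of a multigrid V-cycle for the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries. This is not the paper's GPU implementation (which targets much larger problems on CUDA hardware); all function names and parameters here are illustrative assumptions.

```
import numpy as np

def smooth(u, f, h, iters=3):
    # Weighted Jacobi relaxation for the 3-point Laplacian stencil.
    # NumPy evaluates the right-hand side before assigning, so this
    # is a true Jacobi (not Gauss-Seidel) sweep.
    omega = 2.0 / 3.0
    for _ in range(iters):
        u[1:-1] = ((1.0 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    # r = f - A u for the standard second-order finite-difference Laplacian.
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto the coarser grid (half the intervals).
    nc = (len(r) + 1) // 2
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n):
    # Linear-interpolation prolongation back to the finer grid of size n.
    e = np.zeros(n)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    # One V-cycle: pre-smooth, coarse-grid correction, post-smooth.
    u = smooth(u, f, h)
    if len(u) <= 3:
        return u  # coarsest level: one interior point, smoothing suffices
    r = residual(u, f, h)
    ec = v_cycle(np.zeros((len(r) + 1) // 2), restrict(r), 2.0 * h)
    u += prolong(ec, len(u))
    return smooth(u, f, h)

# Solve -u'' = pi^2 sin(pi x); the exact solution is u(x) = sin(pi x).
n = 2**7 + 1
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: max |residual| = {np.max(np.abs(residual(u, f, h))):.3e}")
```

On the energy finding, the trade-off presumably reduces to energy being power integrated over execution time (E ≈ P · t for a roughly constant draw): a low-power mobile GPU such as the GTX 660M can match a high-power Titan card on total energy when its execution time is proportionally longer.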