Title: Performance Evaluation of GraphCore IPU-M2000 Accelerator for Text Detection Application
Authors: Nupur Sumeet, Karan Rawat, M. Nambiar
DOI: 10.1145/3491204.3527469
Published in: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering
Publication date: 2022-07-14
Citations: 2
Abstract
The large compute load and memory footprint of modern deep neural networks motivate the use of accelerators for high-throughput deployments in applications spanning multiple domains. In this paper, we evaluate the throughput capabilities of a comparatively new accelerator from Graphcore, the IPU-M2000, which supports massive parallelism and in-memory compute. For a text detection model, we measured how throughput and power vary with batch size. We also evaluate compressed versions of this model and analyze how performance varies with model precision. Additionally, we compare IPU (Intelligence Processing Unit) results with state-of-the-art GPU and FPGA deployments of a compute-intensive text region detection application. Our experiments suggest the IPU delivers superior throughput: 27×, 1.89×, and 1.56× that of CPU, FPGA DPU, and A100 GPU deployments, respectively, for the text detection application.
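The abstract reports only relative speedups, not absolute numbers. The sketch below illustrates how the reported ratios relate baseline throughputs to the IPU's; the assumed IPU throughput of 1000 images/s is purely illustrative and does not come from the paper.

```python
# Illustrative only: the absolute IPU throughput below is an assumption,
# not a figure from the paper. Only the speedup ratios (27x, 1.89x, 1.56x)
# are taken from the abstract.
ipu_throughput = 1000.0  # assumed, images/s

# Reported IPU speedup over each baseline for the text detection model.
speedup_vs_ipu = {"CPU": 27.0, "FPGA DPU": 1.89, "A100 GPU": 1.56}

# Implied baseline throughput under the assumed IPU figure.
baseline_throughput = {
    device: ipu_throughput / speedup
    for device, speedup in speedup_vs_ipu.items()
}

for device, t in sorted(baseline_throughput.items(), key=lambda kv: -kv[1]):
    print(f"{device:>8}: {t:7.1f} images/s (IPU is {ipu_throughput / t:.2f}x faster)")
```

This makes the comparison concrete: under the assumed 1000 images/s for the IPU, the A100 GPU would sit near 641 images/s and the CPU near 37 images/s, preserving the reported ratios.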