{"title":"NOC characteristics of cloud applications","authors":"P. Lotfi-Kamran, M. Modarressi, H. Sarbazi-Azad","doi":"10.1109/CADS.2017.8310674","DOIUrl":null,"url":null,"abstract":"Cloud applications have abundant request-level parallelism, and as a result, many-core server processors are good candidates for their execution. A key component in a many-core processor is the network-on-chip (NOC) that connects cores to cache banks and memory, and acts as the medium for delivering instructions and data to the cores. While cloud applications are an important class of massively-parallel workloads that benefit from many-core processors and networks-on-chip, there is no comprehensive study for the NOC requirements of these workloads. In this work, we use full-system simulation and a set of cloud applications to study the characteristics and requirements of these applications with respect to networks-on-chip. We find that NOC latency is the most important optimization criterion for these workloads. As NOC traffic of these workloads is relatively low and approximately follows uniform traffic, we find that knobs like routing algorithm and buffer size that mostly affect NOC bandwidth, beyond a certain point, have little impact on the performance of these workloads. On the other hand, techniques that reduce NOC latency directly improve the performance of cloud applications.","PeriodicalId":321346,"journal":{"name":"2017 19th International Symposium on Computer Architecture and Digital Systems (CADS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 19th International Symposium on Computer Architecture and Digital Systems (CADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CADS.2017.8310674","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Cloud applications have abundant request-level parallelism, and as a result, many-core server processors are good candidates for their execution. A key component of a many-core processor is the network-on-chip (NOC), which connects cores to cache banks and memory and acts as the medium for delivering instructions and data to the cores. While cloud applications are an important class of massively parallel workloads that benefit from many-core processors and networks-on-chip, there is no comprehensive study of the NOC requirements of these workloads. In this work, we use full-system simulation and a set of cloud applications to study the characteristics and requirements of these applications with respect to networks-on-chip. We find that NOC latency is the most important optimization criterion for these workloads. Because the NOC traffic of these workloads is relatively low and approximately uniform, we find that design knobs such as the routing algorithm and buffer size, which mostly affect NOC bandwidth, have little impact on the performance of these workloads beyond a certain point. Techniques that reduce NOC latency, on the other hand, directly improve the performance of cloud applications.
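To make the latency-versus-bandwidth argument concrete, the following is a minimal back-of-the-envelope sketch (my own illustration, not the paper's full-system simulation setup). It assumes a k x k mesh with dimension-order routing, uniform random traffic, and an M/D/1-style approximation for per-router contention; the function names and parameter values are illustrative placeholders, not taken from the paper.

```python
# Sketch: why bandwidth-oriented knobs matter little at low, uniform loads.
# Assumptions (mine): k x k mesh, uniform random traffic, M/D/1-like contention.

def avg_hops_mesh(k: int) -> float:
    """Average Manhattan distance under uniform random traffic on a k x k mesh."""
    per_dim = (k * k - 1) / (3 * k)   # mean |src - dst| per dimension for uniform endpoints
    return 2 * per_dim                # two dimensions

def noc_latency(k=8, t_router=3, t_link=1, flits_per_packet=5, load=0.05):
    """Average packet latency split into a zero-load term and a contention term (cycles)."""
    hops = avg_hops_mesh(k)
    # Zero-load latency: header traversal over all hops plus serialization of the packet body.
    zero_load = hops * (t_router + t_link) + flits_per_packet
    # Rough M/D/1 waiting time per router, with per-hop service time ~ flits_per_packet cycles.
    contention = hops * (load / (2 * (1 - load))) * flits_per_packet
    return zero_load, contention

for load in (0.02, 0.05, 0.10):
    z, c = noc_latency(load=load)
    print(f"load={load:.2f}  zero-load={z:.1f} cycles  contention={c:.2f} cycles")
```

Under this model, at the low injection rates characteristic of these workloads the contention term is a small fraction of total latency, so enlarging buffers or adding routing adaptivity barely moves the total; shaving cycles off the per-hop router delay (t_router) reduces the dominant zero-load term directly, which is consistent with the paper's conclusion that latency-reduction techniques are what pay off.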