A Heterogeneous Cache Distribution with Reconfigurable Interconnect
Aishwariya Pattabiraman, A. Avakian, R. Vemuri
2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum, May 21, 2012
DOI: 10.1109/IPDPSW.2012.31
Current trends in multicore research suggest that hundreds of cores will be integrated on a single chip in the near future for increased performance. This trend presents a set of challenges, one of which is cache distribution among the cores. Networks-on-chip with a homogeneous distribution of cache among the routers have become mainstream in the literature. In this paper, we propose a heterogeneous distribution of cache blocks across routers. This heterogeneity, combined with appropriate scheduling by the OS, reduces network hops by placing more cache blocks closer to the cores executing data-intensive applications. We show that this distribution reduces cache access overhead by as much as 20%. Furthermore, we propose a reconfigurable heterogeneous cache architecture for multi-threaded workloads, in which cache blocks are reassigned to routers based on data needs. We present a constructive heuristic that gives the optimal cache configuration and page coloring for each workload, and show that this approach can reduce cache access time by as much as 61%.
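The placement idea behind the abstract can be illustrated with a toy model. The sketch below is hypothetical and not the authors' heuristic: the mesh size, block count, per-router capacity, and intensity values are all illustrative. It greedily assigns cache blocks to routers of a 2D mesh NoC so that cores with higher data intensity get more blocks nearby, then compares the intensity-weighted average hop count against a homogeneous distribution.

```python
from itertools import product

MESH = 4            # 4x4 mesh of routers, one core per router (illustrative)
TOTAL_BLOCKS = 64   # cache blocks to distribute among the routers
CAP = 8             # hypothetical per-router bank capacity

ROUTERS = list(product(range(MESH), repeat=2))

def hops(a, b):
    """Manhattan distance between two routers (XY routing)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def avg_hop_cost(blocks, intensity):
    """Expected hops per access: each core accesses all blocks uniformly,
    and cores are weighted by their data intensity."""
    total_w = sum(intensity.values())
    cost = 0.0
    for core, w in intensity.items():
        per_core = sum(n * hops(core, r) for r, n in blocks.items()) / TOTAL_BLOCKS
        cost += (w / total_w) * per_core
    return cost

def homogeneous():
    """Baseline: the same number of blocks at every router."""
    per = TOTAL_BLOCKS // len(ROUTERS)
    return {r: per for r in ROUTERS}

def heterogeneous(intensity):
    """Greedy constructive placement: fill routers in order of their
    intensity-weighted distance to all cores, up to the bank capacity."""
    def weighted_dist(r):
        return sum(w * hops(core, r) for core, w in intensity.items())
    order = sorted(ROUTERS, key=weighted_dist)
    blocks = {r: 0 for r in ROUTERS}
    for _ in range(TOTAL_BLOCKS):
        target = next(r for r in order if blocks[r] < CAP)
        blocks[target] += 1
    return blocks

# One data-intensive core in a corner; the rest are lightly loaded.
intensity = {r: 1.0 for r in ROUTERS}
intensity[(0, 0)] = 10.0

hom = avg_hop_cost(homogeneous(), intensity)
het = avg_hop_cost(heterogeneous(intensity), intensity)
print(f"avg hops  homogeneous: {hom:.2f}  heterogeneous: {het:.2f}")
```

With the skewed intensity above, the greedy placement concentrates blocks around the hot corner and the weighted average hop count drops relative to the uniform baseline, which is the qualitative effect the paper quantifies.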