{"title":"CC-NUMA多处理器上缓存库的性能评价","authors":"Hung-Chang Hsiao, C. King","doi":"10.1109/ICPADS.1998.741127","DOIUrl":null,"url":null,"abstract":"Cache depot is a performance enhancement technique on cache-coherent non-uniform memory access (CC-NUMA) multiprocessors, in which nodes in the system store extra memory blocks on behalf of other nodes. In this way memory requests from a node can be satisfied by nearby depot nodes without going all the way to the home node. This not only reduces memory access latency and network traffic, but also spreads the network load more evenly. We study the design strategy for cache depot that: enhances the network interface of each node to include a depot cache, which stores those extra memory blocks for other nodes; and employs a new multicast routing scheme, which is called the multi-hop worms and works cooperatively with depot caches, to transmit coherence messages. By considering message routing and depot caches together the design concept can be applied even to those CC-NUMA systems that have a non-hierarchical, scalable interconnection network. We have developed an execution-driven simulator to evaluate the effectiveness of the design strategy. Performance results from using four SPLASH-2 benchmarks show that the design strategy improves the performance of the CC-NUMA multiprocessor by 11% to 21%. We have also studied in depth various factors which affect the performance of cache depot.","PeriodicalId":226947,"journal":{"name":"Proceedings 1998 International Conference on Parallel and Distributed Systems (Cat. No.98TB100250)","volume":"235 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance evaluation of cache depot on CC-NUMA multiprocessors\",\"authors\":\"Hung-Chang Hsiao, C. King\",\"doi\":\"10.1109/ICPADS.1998.741127\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cache depot is a performance enhancement technique on cache-coherent non-uniform memory access (CC-NUMA) multiprocessors, in which nodes in the system store extra memory blocks on behalf of other nodes. In this way memory requests from a node can be satisfied by nearby depot nodes without going all the way to the home node. This not only reduces memory access latency and network traffic, but also spreads the network load more evenly. We study the design strategy for cache depot that: enhances the network interface of each node to include a depot cache, which stores those extra memory blocks for other nodes; and employs a new multicast routing scheme, which is called the multi-hop worms and works cooperatively with depot caches, to transmit coherence messages. By considering message routing and depot caches together the design concept can be applied even to those CC-NUMA systems that have a non-hierarchical, scalable interconnection network. We have developed an execution-driven simulator to evaluate the effectiveness of the design strategy. Performance results from using four SPLASH-2 benchmarks show that the design strategy improves the performance of the CC-NUMA multiprocessor by 11% to 21%. We have also studied in depth various factors which affect the performance of cache depot.\",\"PeriodicalId\":226947,\"journal\":{\"name\":\"Proceedings 1998 International Conference on Parallel and Distributed Systems (Cat. 
No.98TB100250)\",\"volume\":\"235 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1998-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 1998 International Conference on Parallel and Distributed Systems (Cat. No.98TB100250)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPADS.1998.741127\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 1998 International Conference on Parallel and Distributed Systems (Cat. No.98TB100250)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS.1998.741127","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance evaluation of cache depot on CC-NUMA multiprocessors
Cache depot is a performance-enhancement technique for cache-coherent non-uniform memory access (CC-NUMA) multiprocessors, in which nodes in the system store extra memory blocks on behalf of other nodes. In this way, memory requests from a node can be satisfied by nearby depot nodes without traveling all the way to the home node. This not only reduces memory access latency and network traffic, but also spreads the network load more evenly. We study a design strategy for cache depot that (1) enhances the network interface of each node with a depot cache, which stores the extra memory blocks held for other nodes, and (2) employs a new multicast routing scheme, called multi-hop worms, that works cooperatively with the depot caches to transmit coherence messages. By considering message routing and depot caches together, the design can be applied even to CC-NUMA systems with a non-hierarchical, scalable interconnection network. We have developed an execution-driven simulator to evaluate the effectiveness of the design strategy. Performance results from four SPLASH-2 benchmarks show that the design strategy improves the performance of the CC-NUMA multiprocessor by 11% to 21%. We have also studied in depth the various factors that affect the performance of cache depot.
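The central idea of the abstract, that a memory request traveling toward the home node can be intercepted and satisfied by an intermediate node's depot cache, can be sketched roughly as follows. This is a minimal, hypothetical Python model (names such as Node, depot_lookup, and read_block are assumptions, not the paper's implementation); it ignores coherence actions and the multi-hop worm routing scheme and only illustrates how a depot hit shortens the request path.

```python
# Illustrative sketch of the cache-depot lookup idea: intermediate "depot"
# nodes along the route to the home node may hold extra copies of memory
# blocks and can satisfy a read request early. All names are hypothetical;
# the paper's actual protocol is far more involved.

class Node:
    def __init__(self, node_id, depot_capacity=4):
        self.node_id = node_id
        self.depot_capacity = depot_capacity
        self.depot = {}          # block address -> data held on behalf of other nodes
        self.local_memory = {}   # blocks for which this node is the home node

    def depot_lookup(self, addr):
        """Return the block if this node's depot cache holds a copy, else None."""
        return self.depot.get(addr)

    def depot_fill(self, addr, data):
        """Store an extra copy for another node, evicting (FIFO) if full."""
        if len(self.depot) >= self.depot_capacity:
            self.depot.pop(next(iter(self.depot)))
        self.depot[addr] = data


def read_block(route, addr):
    """Walk the route from requester to home node; a depot hit on an
    intermediate node satisfies the request without reaching the home node.
    Returns (data, hops_traversed)."""
    for hops, node in enumerate(route[1:-1], start=1):
        data = node.depot_lookup(addr)
        if data is not None:
            return data, hops               # satisfied by a nearby depot node
    home = route[-1]
    data = home.local_memory[addr]
    # On the reply path, intermediate nodes may keep a copy for future requests.
    for node in route[1:-1]:
        node.depot_fill(addr, data)
    return data, len(route) - 1             # went all the way to the home node


if __name__ == "__main__":
    nodes = [Node(i) for i in range(4)]
    nodes[3].local_memory[0x100] = "block-data"   # node 3 is the home node
    route = nodes                                  # requester 0 -> ... -> home 3

    _, hops = read_block(route, 0x100)
    print("first read hops:", hops)    # 3: misses everywhere, reaches the home node
    _, hops = read_block(route, 0x100)
    print("second read hops:", hops)   # 1: hit in node 1's depot cache
```

The second read in the usage example returns after one hop instead of three, which is the latency and traffic reduction the abstract attributes to depot caches.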