{"title":"用于多核体系结构的无锁、缓存效率高的共享环缓冲区","authors":"P. Lee, T. Bu, Girish P. Chandranmenon","doi":"10.1145/1882486.1882508","DOIUrl":null,"url":null,"abstract":"We propose MCRingBuffer, a lock-free, cache-efficient shared ring buffer that provides fast data accesses among threads running in multi-core architectures. MCRingBuffer seeks to reduce the cost of inter-core communication by allowing concurrent lock-free data accesses and improving the cache locality of accessing control variables used for thread synchronization. Evaluation on an Intel Xeon multi-core machine shows that MCRingBuffer achieves a throughput gain of up to 4.9x over existing concurrent lock-free ring buffers. A motivating application of MCRingBuffer is parallel network traffic monitoring, in which MCRingBuffer facilitates multi-core architectures to process packets at line rate.","PeriodicalId":329300,"journal":{"name":"Symposium on Architectures for Networking and Communications Systems","volume":"262 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"25","resultStr":"{\"title\":\"A lock-free, cache-efficient shared ring buffer for multi-core architectures\",\"authors\":\"P. Lee, T. Bu, Girish P. Chandranmenon\",\"doi\":\"10.1145/1882486.1882508\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose MCRingBuffer, a lock-free, cache-efficient shared ring buffer that provides fast data accesses among threads running in multi-core architectures. MCRingBuffer seeks to reduce the cost of inter-core communication by allowing concurrent lock-free data accesses and improving the cache locality of accessing control variables used for thread synchronization. Evaluation on an Intel Xeon multi-core machine shows that MCRingBuffer achieves a throughput gain of up to 4.9x over existing concurrent lock-free ring buffers. A motivating application of MCRingBuffer is parallel network traffic monitoring, in which MCRingBuffer facilitates multi-core architectures to process packets at line rate.\",\"PeriodicalId\":329300,\"journal\":{\"name\":\"Symposium on Architectures for Networking and Communications Systems\",\"volume\":\"262 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"25\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Symposium on Architectures for Networking and Communications Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1882486.1882508\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Architectures for Networking and Communications Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1882486.1882508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A lock-free, cache-efficient shared ring buffer for multi-core architectures
We propose MCRingBuffer, a lock-free, cache-efficient shared ring buffer that provides fast data access among threads running on multi-core architectures. MCRingBuffer seeks to reduce the cost of inter-core communication by allowing concurrent lock-free data accesses and by improving the cache locality of the control variables used for thread synchronization. Evaluation on an Intel Xeon multi-core machine shows that MCRingBuffer achieves a throughput gain of up to 4.9x over existing concurrent lock-free ring buffers. A motivating application of MCRingBuffer is parallel network traffic monitoring, where MCRingBuffer enables multi-core architectures to process packets at line rate.
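To make the two ideas in the abstract concrete, the following is a minimal single-producer/single-consumer ring buffer sketch in C++. It is not the authors' MCRingBuffer code; the class name, capacity, and cache-line size are assumptions for illustration. It shows lock-free concurrent access (producer and consumer each update only their own index, with acquire/release synchronization) and cache-line separation of the control variables so the two indices do not cause false sharing between cores.

// Minimal SPSC ring buffer sketch (illustrative, not the paper's implementation).
#include <atomic>
#include <cstddef>

constexpr std::size_t CACHE_LINE = 64;   // assumed cache-line size
constexpr std::size_t CAPACITY   = 1024; // assumed capacity (power of two)

template <typename T>
class SpscRingBuffer {
public:
    // Producer side: returns false when the buffer is full.
    bool push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) & (CAPACITY - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;                 // full (one slot left unused)
        slots_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer side: returns false when the buffer is empty.
    bool pop(T& item) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                 // empty
        item = slots_[tail];
        tail_.store((tail + 1) & (CAPACITY - 1), std::memory_order_release);
        return true;
    }

private:
    // Padding keeps the producer-written and consumer-written indices on
    // separate cache lines; this is the "cache locality of control variables"
    // concern the abstract refers to.
    alignas(CACHE_LINE) std::atomic<std::size_t> head_{0}; // written by producer
    alignas(CACHE_LINE) std::atomic<std::size_t> tail_{0}; // written by consumer
    alignas(CACHE_LINE) T slots_[CAPACITY];
};

The actual MCRingBuffer design goes further than this sketch in how it limits cross-core traffic on the control variables; the sketch only captures the basic lock-free structure and the cache-line padding of the shared indices.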