Chuwen Zhang;Yong Feng;Haoyu Song;Ying Wan;Wenquan Xu;Bin Liu
{"title":"OBMA: Scalable Route Lookups With Fast and Zero-Interrupt Updates","authors":"Chuwen Zhang;Yong Feng;Haoyu Song;Ying Wan;Wenquan Xu;Bin Liu","doi":"10.1109/TNET.2024.3446689","DOIUrl":null,"url":null,"abstract":"Software-based IP route lookup is a key component for packet forwarding in Software Defined Networks. Running lookup algorithms on commodity CPUs is flexible and scalable, which shows advantages on cost and power consumption over the hardware-based forwarding engines. However, dynamic network functions and services make route updates more frequent than ever. Existing algorithms often fall short of the incremental update requirements. In this paper, we propose the Overlay BitMap Algorithm (OBMA), which contains several variations, to support extraordinary update performance while maintaining the highest-in-class lookup speed and storage efficiency. Starting from the basic OBMA_B, we develop two variations with different tradeoffs for different application scenarios. OBMA_L supports faster lookups than OBMA_B at a small cost of update speed. OBMA_S achieves better storage efficiency than OBMA_B at a small cost of lookup throughput. We run our algorithms on a commodity CPU and evaluate them with real-world route tables and traces. The experiments show that OBMA achieves the lowest memory footprint, the highest update speed, and over 200 Mpps lookup throughput. Specifically, OBMA_S reduces the memory footprint to 3.98 bytes/prefix which is 25.33% smaller that of the state-of-the-art Poptrie; OBMA_L supports 252.02 Mpps lookup throughput with a single thread, and more than 600 Mpps with multiple parallel threads in a single CPU, significantly outperforming the state-of-the-art Poptrie and SAIL; OBMA_B supports updates at a rate of 14.58M updates/s which is 15 times faster than Poptrie. The tests show that the update process has little interference with the lookup process for OBMA, and achieves zero-interrupt to lookups with multiple threads.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"4842-4854"},"PeriodicalIF":3.0000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10714022/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0
Abstract
Software-based IP route lookup is a key component of packet forwarding in Software Defined Networks. Running lookup algorithms on commodity CPUs is flexible and scalable, offering advantages in cost and power consumption over hardware-based forwarding engines. However, dynamic network functions and services make route updates more frequent than ever, and existing algorithms often fall short of the incremental update requirements. In this paper, we propose the Overlay BitMap Algorithm (OBMA), which contains several variations, to support extraordinary update performance while maintaining best-in-class lookup speed and storage efficiency. Starting from the basic OBMA_B, we develop two variations with different tradeoffs for different application scenarios: OBMA_L supports faster lookups than OBMA_B at a small cost in update speed, and OBMA_S achieves better storage efficiency than OBMA_B at a small cost in lookup throughput. We run our algorithms on a commodity CPU and evaluate them with real-world route tables and traces. The experiments show that OBMA achieves the lowest memory footprint, the highest update speed, and over 200 Mpps lookup throughput. Specifically, OBMA_S reduces the memory footprint to 3.98 bytes/prefix, which is 25.33% smaller than that of the state-of-the-art Poptrie; OBMA_L supports 252.02 Mpps lookup throughput with a single thread and more than 600 Mpps with multiple parallel threads on a single CPU, significantly outperforming the state-of-the-art Poptrie and SAIL; and OBMA_B supports updates at a rate of 14.58M updates/s, which is 15 times faster than Poptrie. The tests show that the update process interferes little with the lookup process in OBMA, achieving zero interruption to lookups with multiple threads.
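The abstract does not detail OBMA's internal node layout, but it places the algorithm in the family of bitmap-based lookup structures such as Poptrie and SAIL. As a rough illustration of the core indexing idea shared by such schemes (not the authors' actual data structure), the following minimal C sketch shows how a bitmap plus a popcount can compress a sparse next-hop array for one lookup stride; the node fields, stride width, and names here are assumptions made for illustration only.

```c
/*
 * Illustrative sketch only: a generic bitmap-compressed lookup step of the
 * kind used by bitmap-based LPM schemes such as Poptrie. The real OBMA node
 * layout is not described in the abstract; sizes and names are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

/* A node covering a 6-bit stride: one bit per possible chunk value. */
struct bitmap_node {
    uint64_t leafvec;        /* bit i set => chunk value i has an entry      */
    const uint16_t *leaves;  /* next hops, stored packed, only for set bits  */
};

/* Return the next hop for a 6-bit chunk, or a fallback if the bit is unset. */
static uint16_t bitmap_lookup(const struct bitmap_node *n,
                              unsigned chunk, uint16_t fallback)
{
    uint64_t bit = 1ULL << chunk;
    if (!(n->leafvec & bit))
        return fallback;
    /* popcount of the bits below `chunk` gives the index into the packed array */
    unsigned idx = __builtin_popcountll(n->leafvec & (bit - 1));
    return n->leaves[idx];
}

int main(void)
{
    /* Entries exist for chunk values 3 and 5 only; everything else -> fallback. */
    const uint16_t hops[] = { 7, 9 };
    struct bitmap_node n = { (1ULL << 3) | (1ULL << 5), hops };

    printf("%u %u %u\n",
           bitmap_lookup(&n, 3, 0),   /* 7 */
           bitmap_lookup(&n, 5, 0),   /* 9 */
           bitmap_lookup(&n, 4, 0));  /* 0 (fallback) */
    return 0;
}
```

The compression comes from storing next hops only for set bits, which is what keeps per-prefix memory low in bitmap-based schemes; how OBMA overlays its bitmaps to also make updates fast and non-interrupting is the subject of the paper itself.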
Journal Description
The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking, covering all sorts of information transport networks over all sorts of physical layer technologies, both wireline (all kinds of guided media: e.g., copper, optical) and wireless (e.g., radio-frequency, acoustic (e.g., underwater), infra-red), or hybrids of these. The journal welcomes applied contributions reporting on novel experiences and experiments with actual systems.