{"title":"The virtualized MPTCP proxy performance in cellular network","authors":"Sunyoung Chung, Seonghoon Moon, Songkuk Kim","doi":"10.1109/ICUFN.2017.7993881","DOIUrl":null,"url":null,"abstract":"For massive traffic handling in the cellular network, network function virtualization (NFV) is considered to be the most cost-efficient solution in the 5G networks. Since NFV decouples the network function from the underlying hardware, the purpose-built machines can be replaced by the commodity hardware. However, NFV might suffer from the very fact that it is a solely software-based solution. The objective of this paper is to find out the NFV performance issue in cellular network. Also, we want to investigate whether NFV is comparable with MPTCP connections. Since not many servers are MPTCP-enabled, a SOCKS proxy is usually deployed in between to enable MPTCP connections. We regarded a virtualized proxy as an NFV instance and set up two types of virtualized SOCKS proxies, one as KVM and the other as docker. We also tried to find out if there is a performance difference between hypervisor-based and container-based virtualization in our setting. As the results show, the docker proxy performs better than the KVM proxy. In terms of resource consumption, for example, the docker utilized 31.9% of host CPU, whereas the KVM consumed 36.9% when both of them handling 2,000 concurrent requests. The throughput comparison of different TCP connections reflects the characteristics of MPTCP flow that performs best in a long and large flow. The latency between the server and the proxy determined the throughput of MPTCP with a virtualized proxy. If the latency between the server and the proxy gets larger (RTT 100ms), the MPTCP proxy throughput of all three different flow got worse than the single TCP connections, whether it is a short flow (1KB) or a long flow (164MB). However, if the latency is in the middle range (RTT 50ms), the MPTCP proxy throughput of a short (1KB) and medium (900KB) flow works poorly, but a long flow (164MB) still works better than the single TCP connections.","PeriodicalId":284480,"journal":{"name":"2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUFN.2017.7993881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
For handling massive traffic in cellular networks, network function virtualization (NFV) is considered the most cost-efficient solution for 5G networks. Since NFV decouples network functions from the underlying hardware, purpose-built machines can be replaced by commodity hardware. However, NFV may suffer from the very fact that it is a purely software-based solution. The objective of this paper is to identify NFV performance issues in cellular networks. We also want to investigate whether an NFV-based proxy can handle MPTCP connections with comparable performance. Since few servers are MPTCP-enabled, a SOCKS proxy is usually deployed between client and server to enable MPTCP connections. We regard a virtualized proxy as an NFV instance and set up two types of virtualized SOCKS proxies, one on KVM and the other on Docker. We also examine whether there is a performance difference between hypervisor-based and container-based virtualization in our setting. The results show that the Docker proxy performs better than the KVM proxy. In terms of resource consumption, for example, Docker utilized 31.9% of the host CPU, whereas KVM consumed 36.9%, when each handled 2,000 concurrent requests. The throughput comparison across different TCP connections reflects the characteristic of MPTCP that it performs best on long, large flows. The latency between the server and the proxy determines the throughput of MPTCP with a virtualized proxy. When the server-proxy latency is high (RTT 100 ms), the MPTCP proxy throughput of all three flow sizes falls below that of single TCP connections, whether the flow is short (1 KB) or long (164 MB). However, when the latency is in the middle range (RTT 50 ms), the MPTCP proxy throughput remains poor for short (1 KB) and medium (900 KB) flows, but a long flow (164 MB) still performs better than single TCP connections.
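The abstract does not describe the authors' measurement tooling, but the client side of such an experiment can be sketched: open a connection through the SOCKS proxy and time downloads of the three flow sizes studied (1 KB, 900 KB, 164 MB). The Python sketch below is illustrative only, not the paper's harness; it assumes the third-party PySocks package and a placeholder server (server.example.net) that streams back the number of bytes the client requests. MPTCP subflow handling happens in the kernel of the MPTCP-enabled hosts, so the socket code is identical to plain TCP.

```python
# Minimal throughput sketch through a SOCKS5 proxy (assumed setup, not the
# authors' tooling). Requires PySocks: pip install PySocks
# Hostnames, the port, and the size-request protocol are placeholders.
import time
import socks  # PySocks

PROXY_HOST, PROXY_PORT = "proxy.example.net", 1080   # virtualized SOCKS proxy
SERVER = ("server.example.net", 9000)                # hypothetical test server

# Flow sizes from the paper: short, medium, long
FLOW_SIZES = {"short": 1_000, "medium": 900_000, "long": 164_000_000}

def measure(flow_bytes: int) -> float:
    """Download flow_bytes via the proxy; return throughput in Mbit/s."""
    sock = socks.socksocket()                         # drop-in socket subclass
    sock.set_proxy(socks.SOCKS5, PROXY_HOST, PROXY_PORT)
    sock.connect(SERVER)
    sock.sendall(f"{flow_bytes}\n".encode())          # hypothetical size request
    received, start = 0, time.monotonic()
    while received < flow_bytes:
        chunk = sock.recv(65536)
        if not chunk:                                 # server closed early
            break
        received += len(chunk)
    elapsed = time.monotonic() - start
    sock.close()
    return received * 8 / elapsed / 1e6

if __name__ == "__main__":
    for name, size in FLOW_SIZES.items():
        print(f"{name} flow ({size} B): {measure(size):.2f} Mbit/s")
```

Because PySocks subclasses the standard socket, the same client code runs unchanged whether the client-proxy leg uses single-path TCP or kernel-level MPTCP, which is what makes the paper's single-TCP vs. MPTCP-proxy comparison possible from one harness.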