The virtualized MPTCP proxy performance in cellular network

Sunyoung Chung, Seonghoon Moon, Songkuk Kim
{"title":"The virtualized MPTCP proxy performance in cellular network","authors":"Sunyoung Chung, Seonghoon Moon, Songkuk Kim","doi":"10.1109/ICUFN.2017.7993881","DOIUrl":null,"url":null,"abstract":"For massive traffic handling in the cellular network, network function virtualization (NFV) is considered to be the most cost-efficient solution in the 5G networks. Since NFV decouples the network function from the underlying hardware, the purpose-built machines can be replaced by the commodity hardware. However, NFV might suffer from the very fact that it is a solely software-based solution. The objective of this paper is to find out the NFV performance issue in cellular network. Also, we want to investigate whether NFV is comparable with MPTCP connections. Since not many servers are MPTCP-enabled, a SOCKS proxy is usually deployed in between to enable MPTCP connections. We regarded a virtualized proxy as an NFV instance and set up two types of virtualized SOCKS proxies, one as KVM and the other as docker. We also tried to find out if there is a performance difference between hypervisor-based and container-based virtualization in our setting. As the results show, the docker proxy performs better than the KVM proxy. In terms of resource consumption, for example, the docker utilized 31.9% of host CPU, whereas the KVM consumed 36.9% when both of them handling 2,000 concurrent requests. The throughput comparison of different TCP connections reflects the characteristics of MPTCP flow that performs best in a long and large flow. The latency between the server and the proxy determined the throughput of MPTCP with a virtualized proxy. If the latency between the server and the proxy gets larger (RTT 100ms), the MPTCP proxy throughput of all three different flow got worse than the single TCP connections, whether it is a short flow (1KB) or a long flow (164MB). However, if the latency is in the middle range (RTT 50ms), the MPTCP proxy throughput of a short (1KB) and medium (900KB) flow works poorly, but a long flow (164MB) still works better than the single TCP connections.","PeriodicalId":284480,"journal":{"name":"2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUFN.2017.7993881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

For handling massive traffic in cellular networks, network function virtualization (NFV) is considered the most cost-efficient solution for 5G. Because NFV decouples network functions from the underlying hardware, purpose-built machines can be replaced by commodity hardware. However, NFV may suffer from the very fact that it is a purely software-based solution. The objective of this paper is to identify NFV performance issues in the cellular network and to investigate how NFV performs with MPTCP connections. Since few servers are MPTCP-enabled, a SOCKS proxy is usually deployed in between to enable MPTCP connections. We regarded a virtualized proxy as an NFV instance and set up two types of virtualized SOCKS proxies, one on KVM and the other on Docker, to determine whether hypervisor-based and container-based virtualization differ in performance in our setting. The results show that the Docker proxy performs better than the KVM proxy. In terms of resource consumption, for example, the Docker proxy utilized 31.9% of host CPU, whereas the KVM proxy consumed 36.9%, when both handled 2,000 concurrent requests. The throughput comparison across different TCP connections reflects the known characteristic of MPTCP flows: they perform best on long, large flows. The latency between the server and the proxy determined the throughput of MPTCP with a virtualized proxy. When that latency is large (RTT 100 ms), the MPTCP proxy throughput of all three flow sizes fell below that of single TCP connections, whether the flow is short (1 KB) or long (164 MB). When the latency is in the middle range (RTT 50 ms), the MPTCP proxy throughput of short (1 KB) and medium (900 KB) flows remains poor, but a long flow (164 MB) still outperforms single TCP connections.
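The abstract's core setup is that few servers speak MPTCP, so a SOCKS proxy sits between client and server: the client-to-proxy leg can run MPTCP while the proxy-to-server leg stays plain TCP. Below is a minimal client-side sketch of that arrangement. It assumes an upstream Linux kernel (>= 5.6) that exposes IPPROTO_MPTCP (protocol number 262); the paper itself predates this and used the out-of-tree MPTCP kernel, where ordinary TCP sockets are upgraded transparently. The proxy and target addresses are hypothetical placeholders, not values from the paper.

```python
import socket
import struct

# Hypothetical endpoints for illustration only.
PROXY_HOST, PROXY_PORT = "10.0.0.2", 1080
TARGET_HOST, TARGET_PORT = "server.example.com", 80

# Exposed by upstream Linux >= 5.6; raises OSError on kernels without MPTCP.
IPPROTO_MPTCP = 262

def socks5_connect(proxy, target):
    """Open an MPTCP connection to a SOCKS5 proxy and ask it to CONNECT
    to the target (RFC 1928). The proxy-to-server leg is plain TCP,
    which is why the proxy can enable MPTCP toward non-MPTCP servers."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    s.connect(proxy)

    # Greeting: version 5, one auth method offered, "no authentication".
    s.sendall(b"\x05\x01\x00")
    if s.recv(2) != b"\x05\x00":
        raise ConnectionError("proxy refused the no-auth method")

    # CONNECT request using a domain-name address (ATYP = 0x03).
    host, port = target
    req = (b"\x05\x01\x00\x03" + bytes([len(host)])
           + host.encode() + struct.pack("!H", port))
    s.sendall(req)

    # Reply: VER REP RSV ATYP BND.ADDR BND.PORT (10 bytes for IPv4).
    reply = s.recv(10)
    if len(reply) < 2 or reply[1] != 0x00:
        raise ConnectionError("SOCKS CONNECT failed")
    return s  # now a relayed byte stream to the target

if __name__ == "__main__":
    conn = socks5_connect((PROXY_HOST, PROXY_PORT), (TARGET_HOST, TARGET_PORT))
    conn.sendall(b"GET / HTTP/1.0\r\nHost: " + TARGET_HOST.encode() + b"\r\n\r\n")
    print(conn.recv(4096).decode(errors="replace").splitlines()[0])
    conn.close()
```

From the application's point of view the proxied connection is just a byte stream; whether the first leg runs one TCP subflow or several MPTCP subflows is decided by the kernel, which is what lets a virtualized proxy add MPTCP without server-side changes.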
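The throughput results are broken down by flow size (short 1 KB, medium 900 KB, long 164 MB). A measurement of that kind would time one fresh download per flow size, as in the sketch below. Only the three flow sizes come from the abstract; the measure_throughput helper and the assumption that open_conn() yields a connected socket streaming the requested bytes (e.g. the socks5_connect() sketch above plus an HTTP GET for a file of matching size) are illustrative.

```python
import time

# Flow sizes reported in the paper: short, medium, and long flows.
FLOW_SIZES = {"short": 1 * 1024, "medium": 900 * 1024, "long": 164 * 1024 * 1024}

def measure_throughput(open_conn, nbytes):
    """Time a download of `nbytes` over a fresh connection and return
    goodput in Mbit/s. `open_conn` must return a connected socket whose
    peer will stream at least `nbytes` bytes."""
    conn = open_conn()
    start = time.monotonic()
    received = 0
    while received < nbytes:
        chunk = conn.recv(65536)
        if not chunk:
            break  # peer closed early
        received += len(chunk)
    elapsed = time.monotonic() - start
    conn.close()
    return received * 8 / elapsed / 1e6
```

Running this once per entry in FLOW_SIZES, for both the MPTCP-proxied path and a direct single-TCP path, yields the comparison the abstract describes; a fresh connection per run matters because short flows never leave TCP slow start, which is why MPTCP's extra subflows only pay off on the long flow.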