Scalability and Performance Evaluation of Edge Cloud Systems for Latency Constrained Applications

S. Maheshwari, D. Raychaudhuri, I. Seskar, F. Bronzino
{"title":"Scalability and Performance Evaluation of Edge Cloud Systems for Latency Constrained Applications","authors":"S. Maheshwari, D. Raychaudhuri, I. Seskar, F. Bronzino","doi":"10.1109/SEC.2018.00028","DOIUrl":null,"url":null,"abstract":"This paper presents an analysis of the scalability and performance of an edge cloud system designed to support latency-sensitive applications. A system model for geographically dispersed edge clouds is developed by considering an urban area such as Chicago and co-locating edge computing clusters with known Wi-Fi access point locations. The model also allows for provisioning of network bandwidth and processing resources with specified parameters in both edge and the cloud. The model can then be used to determine application response time (sum of network delay, compute queuing and compute processing time), as a function of offered load for different values of edge and core compute resources, and network bandwidth parameters. Numerical results are given for the city-scale scenario under consideration to show key system level trade-offs between edge cloud and conventional cloud computing. Alternative strategies for routing service requests to edge vs. core cloud clusters are discussed and evaluated. Key conclusions from the study are: (a) the core cloud-only system outperforms the edge-only system having low inter-edge bandwidth, (b) a distributed edge cloud selection scheme can approach the global optimal assignment when the edge has sufficient compute resources and high inter-edge bandwidth, and (c) adding capacity to an existing edge network without increasing the inter-edge bandwidth contributes to network wide congestion and can reduce system capacity.","PeriodicalId":376439,"journal":{"name":"2018 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"89","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/ACM Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEC.2018.00028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 89

Abstract

This paper presents an analysis of the scalability and performance of an edge cloud system designed to support latency-sensitive applications. A system model for geographically dispersed edge clouds is developed by considering an urban area such as Chicago and co-locating edge computing clusters with known Wi-Fi access point locations. The model also allows for provisioning of network bandwidth and processing resources with specified parameters at both the edge and the cloud. The model can then be used to determine application response time (the sum of network delay, compute queuing time, and compute processing time) as a function of offered load for different values of edge and core compute resources and network bandwidth parameters. Numerical results are given for the city-scale scenario under consideration to show key system-level trade-offs between edge cloud and conventional cloud computing. Alternative strategies for routing service requests to edge vs. core cloud clusters are discussed and evaluated. Key conclusions from the study are: (a) a core cloud-only system outperforms an edge-only system with low inter-edge bandwidth, (b) a distributed edge cloud selection scheme can approach the globally optimal assignment when the edge has sufficient compute resources and high inter-edge bandwidth, and (c) adding capacity to an existing edge network without increasing the inter-edge bandwidth contributes to network-wide congestion and can reduce system capacity.
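
The sketch below is a minimal illustration (not the authors' model) of the response-time decomposition described in the abstract: response time as the sum of network delay, compute queuing time, and compute processing time, evaluated as a function of offered load. It assumes an M/M/1 queue at each compute cluster; that queuing assumption, the edge/core parameter values, and all function names are illustrative choices, not values from the paper.

# Python sketch of the response-time decomposition, under an assumed M/M/1 queue.

def mm1_sojourn_time(arrival_rate, service_rate):
    # Mean queuing + processing time of an M/M/1 server (seconds).
    if arrival_rate >= service_rate:
        return float("inf")  # saturated cluster: response time grows without bound
    return 1.0 / (service_rate - arrival_rate)

def network_delay(payload_bits, bandwidth_bps, propagation_s):
    # Transfer time over the access/backhaul link plus propagation delay (seconds).
    return payload_bits / bandwidth_bps + propagation_s

def response_time(offered_load, service_rate, payload_bits, bandwidth_bps, propagation_s):
    # Total response time = network delay + compute queuing + compute processing.
    return (network_delay(payload_bits, bandwidth_bps, propagation_s)
            + mm1_sojourn_time(offered_load, service_rate))

if __name__ == "__main__":
    # Hypothetical comparison: a nearby, smaller edge cluster vs. a distant, larger core cloud.
    scenarios = [("edge",       200.0,  100e6, 0.002),   # service rate (req/s), bandwidth (bps), propagation (s)
                 ("core cloud", 2000.0, 1e9,   0.040)]
    for label, mu, bw, prop in scenarios:
        for load in (50.0, 150.0, 500.0):
            rt = response_time(load, mu, payload_bits=8e6, bandwidth_bps=bw, propagation_s=prop)
            print(f"{label:10s} load={load:6.1f} req/s -> response {rt*1e3:8.2f} ms")

Running the sketch shows the qualitative trade-off the abstract discusses: at low offered load the edge wins on network delay, while at high load its smaller compute capacity saturates and the core cloud, despite longer propagation delay, gives lower total response time.
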