Online Distributed Job Dispatching with Outdated and Partially-Observable Information

Yuncong Hong, B. Lv, Rui Wang, Haisheng Tan, Zhenhua Han, Hao Zhou, F. Lau
DOI: 10.1109/MSN50589.2020.00059
Published in: 2020 16th International Conference on Mobility, Sensing and Networking (MSN), December 2020
Citations: 0

Abstract

In this paper, we investigate online distributed job dispatching in an edge computing system residing in a Metropolitan Area Network (MAN). Specifically, job dispatchers are implemented on access points (APs), which collect jobs from mobile users and distribute each job to a server at the edge or in the cloud. A signaling mechanism with periodic broadcast is introduced to facilitate cooperation among APs. Transmission latency is non-negligible in a MAN, which leads to outdated information sharing among APs. Moreover, fully observing the system state is impractical, as receiving all broadcasts is time-consuming. Therefore, we formulate the distributed optimization of job dispatching strategies among the APs as a Markov decision process with partial and outdated system state, i.e., a partially observable Markov decision process (POMDP). The conventional solution for a POMDP is impractical due to its prohibitive time complexity. We propose a novel low-complexity solution framework for distributed job dispatching, based on which the optimization of the job dispatching policy can be decoupled via an alternative policy iteration algorithm, so that each AP can perform distributed policy iteration using only partial and outdated observations. A theoretical performance lower bound is proved for our approximate MDP solution. Furthermore, we conduct extensive simulations based on the Google Cluster trace. The evaluation results show that our policy can achieve as much as a 20.67% reduction in average job response time compared with heuristic baselines, and our algorithm performs consistently well under various parameter settings.
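The core difficulty the abstract describes — APs dispatching jobs based on queue information that is only refreshed by periodic broadcasts, and is therefore stale between refreshes — can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm: the arrival rate, service model, and the greedy shortest-observed-queue policy are all illustrative assumptions, and the local-echo correction (an AP updating its own snapshot for jobs it dispatched itself) is one simple way to partially compensate for outdated observations.

```python
import random

def simulate(num_aps=3, num_servers=4, horizon=200,
             broadcast_period=5, arrival_prob=0.6, seed=0):
    """Toy discrete-time simulation of distributed dispatching with
    outdated observations.  Each AP routes an arriving job to the edge
    server with the shortest *observed* queue, but the shared snapshot
    is only refreshed every `broadcast_period` slots.  Returns the
    average job response time (queueing proxy) over the horizon."""
    rng = random.Random(seed)
    queues = [0] * num_servers      # true backlog at each server
    observed = list(queues)         # last broadcast snapshot seen by APs
    total_response, jobs = 0, 0

    for t in range(horizon):
        if t % broadcast_period == 0:
            observed = list(queues)     # periodic broadcast refreshes state
        for _ in range(num_aps):
            if rng.random() < arrival_prob:     # Bernoulli job arrival
                # greedy policy on (possibly stale) observed queues
                s = min(range(num_servers), key=lambda i: observed[i])
                queues[s] += 1
                observed[s] += 1    # AP locally echoes its own dispatch
                total_response += queues[s]     # position in queue ~ delay
                jobs += 1
        # each server completes at most one job per slot
        queues = [max(0, q - 1) for q in queues]

    return total_response / max(jobs, 1)
```

Varying `broadcast_period` in this sketch shows the trade-off the paper targets: frequent broadcasts keep observations fresh but cost signaling overhead, while long periods make the greedy policy herd jobs onto servers that only *appeared* lightly loaded at the last refresh.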