Adversarial Attacks in a Deep Reinforcement Learning based Cluster Scheduler

Shaojun Zhang, Chen Wang, Albert Y. Zomaya
{"title":"基于深度强化学习的集群调度中的对抗性攻击","authors":"Shaojun Zhang, Chen Wang, Albert Y. Zomaya","doi":"10.1109/MASCOTS50786.2020.9285955","DOIUrl":null,"url":null,"abstract":"A scheduler is essential for resource management in a shared computer cluster, particularly scheduling algorithms play an important role in meeting service level objectives of user applications in large scale clusters that underlie cloud computing. Traditional cluster schedulers are often based on empirical observations of patterns of jobs running on them. It is unclear how effective they are for capturing the patterns of a variety of jobs in clouds. Recent advances in Deep Reinforcement Learning (DRL) promise a new optimization framework for a scheduler to systematically address the problem. A DRL-based scheduler can extract detailed patterns from job features and the dynamics of cloud resource utilization for better scheduling decisions. However, the deep neural network models used by the scheduler might be vulnerable to adversarial attacks. There is limited research investigating the vulnerability in DRL-based schedulers. In this paper, we give a white-box attack method to show that malicious users can exploit the scheduling vulnerability to benefit certain jobs. The proposed attack method only requires minor perturbations job features to significantly change the scheduling priority of these jobs. We implement both greedy and critical path based algorithms to facilitate the attacks to a state-of-the-art DRL based scheduler called Decima. Our extensive experiments on TPC-H workloads show a 62% and 66% success rate of attacks with the two algorithms. Successful attacks achieve a 18.6% and 17.5% completion time reduction.","PeriodicalId":272614,"journal":{"name":"2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Adversarial Attacks in a Deep Reinforcement Learning based Cluster Scheduler\",\"authors\":\"Shaojun Zhang, Chen Wang, Albert Y. Zomaya\",\"doi\":\"10.1109/MASCOTS50786.2020.9285955\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A scheduler is essential for resource management in a shared computer cluster, particularly scheduling algorithms play an important role in meeting service level objectives of user applications in large scale clusters that underlie cloud computing. Traditional cluster schedulers are often based on empirical observations of patterns of jobs running on them. It is unclear how effective they are for capturing the patterns of a variety of jobs in clouds. Recent advances in Deep Reinforcement Learning (DRL) promise a new optimization framework for a scheduler to systematically address the problem. A DRL-based scheduler can extract detailed patterns from job features and the dynamics of cloud resource utilization for better scheduling decisions. However, the deep neural network models used by the scheduler might be vulnerable to adversarial attacks. There is limited research investigating the vulnerability in DRL-based schedulers. In this paper, we give a white-box attack method to show that malicious users can exploit the scheduling vulnerability to benefit certain jobs. 
The proposed attack method only requires minor perturbations job features to significantly change the scheduling priority of these jobs. We implement both greedy and critical path based algorithms to facilitate the attacks to a state-of-the-art DRL based scheduler called Decima. Our extensive experiments on TPC-H workloads show a 62% and 66% success rate of attacks with the two algorithms. Successful attacks achieve a 18.6% and 17.5% completion time reduction.\",\"PeriodicalId\":272614,\"journal\":{\"name\":\"2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)\",\"volume\":\"157 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MASCOTS50786.2020.9285955\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MASCOTS50786.2020.9285955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

A scheduler is essential for resource management in a shared computer cluster; in particular, scheduling algorithms play an important role in meeting the service level objectives of user applications in the large-scale clusters that underlie cloud computing. Traditional cluster schedulers are often based on empirical observations of the patterns of jobs running on them, and it is unclear how effective they are at capturing the patterns of the wide variety of jobs in clouds. Recent advances in Deep Reinforcement Learning (DRL) promise a new optimization framework for a scheduler to address the problem systematically. A DRL-based scheduler can extract detailed patterns from job features and the dynamics of cloud resource utilization to make better scheduling decisions. However, the deep neural network models used by such a scheduler may be vulnerable to adversarial attacks, and there is limited research investigating this vulnerability in DRL-based schedulers. In this paper, we present a white-box attack method showing that malicious users can exploit the scheduling vulnerability to benefit certain jobs. The proposed attack requires only minor perturbations to job features to significantly change the scheduling priority of these jobs. We implement both greedy and critical-path-based algorithms to mount the attacks against a state-of-the-art DRL-based scheduler called Decima. Our extensive experiments on TPC-H workloads show attack success rates of 62% and 66% for the two algorithms, and successful attacks reduce completion time by 18.6% and 17.5%, respectively.
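To make the attack surface concrete, the sketch below illustrates the general idea of a greedy, gradient-guided perturbation of a job's feature vector against a white-box priority-scoring network, in the spirit of the "minor perturbations to job features" described above. This is not the paper's algorithm or Decima's implementation: the `PriorityNet` model, the feature layout, and the `budget`/`steps` values are illustrative assumptions only.

```python
# Minimal sketch (NOT the authors' implementation) of a greedy white-box
# perturbation against a hypothetical priority-scoring network.
import torch
import torch.nn as nn


class PriorityNet(nn.Module):
    """Stand-in for a DRL scheduler's policy head: maps per-job features
    to a scalar priority score (higher = scheduled earlier)."""

    def __init__(self, n_features: int = 5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)


def greedy_feature_attack(net: nn.Module,
                          features: torch.Tensor,
                          budget: float = 0.05,
                          steps: int = 20) -> torch.Tensor:
    """Greedily nudge a job's feature vector within an L-infinity budget
    so that the white-box network assigns it a higher priority score."""
    original = features.detach()
    adv = original.clone().requires_grad_(True)
    step_size = budget / steps
    lo, hi = original - budget, original + budget
    for _ in range(steps):
        score = net(adv)        # priority the scheduler would assign
        score.backward()        # white-box access: gradient w.r.t. features
        with torch.no_grad():
            adv += step_size * adv.grad.sign()            # move toward a higher score
            adv.copy_(torch.max(torch.min(adv, hi), lo))  # keep the perturbation "minor"
        adv.grad.zero_()
    return adv.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    net = PriorityNet()
    job = torch.randn(5)   # hypothetical features, e.g. remaining work, task count, ...
    perturbed = greedy_feature_attack(net, job)
    print("priority before:", net(job).item())
    print("priority after :", net(perturbed).item())
```

The L-infinity budget is what keeps the perturbation "minor"; a critical-path-based variant would presumably concentrate the perturbation on the stages that determine a job's critical path rather than spreading it across all features uniformly.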