Offline Scheduling of Multi-threaded Request Streams on a Caching Server

V. Rehn-Sonigo, D. Trystram, Frédéric Wagner, Haifeng Xu, Guochuan Zhang
{"title":"Offline Scheduling of Multi-threaded Request Streams on a Caching Server","authors":"V. Rehn-Sonigo, D. Trystram, Frédéric Wagner, Haifeng Xu, Guochuan Zhang","doi":"10.1109/IPDPS.2011.111","DOIUrl":null,"url":null,"abstract":"In this work, we are interested in the problem of satisfying multiple concurrent requests submitted to a computing server. Informally, there are users each sending a sequence of requests to the server. The requests consist of tasks linked by precedence constraints. Tasks may occur several times in the same sequence as well as in a request sequence of another user. The computing server has to execute tasks with variable processing times. The server owns a cache of limited size where intermediate results of the processing may be stored. If an intermediate result for a task is stored into the cache, no processing cost has to be paid and the result can directly be fetched from the cache. The goal of this work is to determine a schedule of the tasks such that an optimization function is minimized (the only objective studied up to now is the make span). This problem is a variant of caching which considers only one sequence of requests. We then extend the study to the minimization of the mean completion time of the request sequences. Two models are considered. In the first model, caching is forced whereas in the second model caching is optional and one can choose whether an intermediate result is stored in the cache or not. All combinations turn out to be NP-hard for fixed cache sizes and we provide a formulation as dynamic program as well as bounds for in approximation. We propose polynomial time approximation algorithms for some variants and analyze their approximation ratios. Finally, we also devise some heuristics and present experimental results. Tasks may occur several times in the same sequence as well as in a request sequence of another user. The computing server has to execute tasks with variable processing times. The server owns a cache of limited size where intermediate results of the processing may be stored. If an intermediate result for a task is stored into the cache, no processing cost has to be paid and the result can directly be fetched from the cache. The goal of this work is to determine a schedule of the tasks such that an optimization function is minimized (the only objective studied up to now is the make span). This problem is a variant of caching which considers only one sequence of requests. We then extend the study to the minimization of the mean completion time of the request sequences. Two models are considered. In the first model, caching is forced whereas in the second model caching is optional and one can choose whether an intermediate result is stored in the cache or not. All combinations turn out to be NP-hard for fixed cache sizes and we provide a formulation as dynamic program as well as bounds for in approximation. We propose polynomial time approximation algorithms for some variants and analyze their approximation ratios. 
Finally, we also devise some heuristics and present experimental results.","PeriodicalId":355100,"journal":{"name":"2011 IEEE International Parallel & Distributed Processing Symposium","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE International Parallel & Distributed Processing Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS.2011.111","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

In this work, we are interested in the problem of satisfying multiple concurrent requests submitted to a computing server. Informally, several users each send a sequence of requests to the server. The requests consist of tasks linked by precedence constraints. A task may occur several times in the same sequence as well as in the request sequence of another user. The computing server has to execute the tasks, which have variable processing times. The server owns a cache of limited size where intermediate results of the processing may be stored. If the intermediate result of a task is stored in the cache, no processing cost has to be paid and the result can be fetched directly from the cache. The goal of this work is to determine a schedule of the tasks such that an objective function is minimized (the only objective studied up to now is the makespan). This problem is a variant of caching, which considers only one sequence of requests. We then extend the study to the minimization of the mean completion time of the request sequences. Two models are considered. In the first model, caching is forced, whereas in the second model caching is optional and one can choose whether an intermediate result is stored in the cache or not. All combinations turn out to be NP-hard for fixed cache sizes, and we provide a dynamic programming formulation as well as inapproximability bounds. We propose polynomial-time approximation algorithms for some variants and analyze their approximation ratios. Finally, we also devise some heuristics and present experimental results.
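To make the cost model concrete, below is a minimal Python sketch (not from the paper) that evaluates the makespan of one candidate task order in the forced-caching model: a cache hit costs nothing, a miss costs the task's processing time. It assumes each intermediate result occupies one cache slot and uses LRU eviction purely for illustration; precedence constraints and the choice of eviction victims, which the paper treats as part of the scheduling decision, are ignored here.

```python
from collections import OrderedDict

def makespan(schedule, proc_time, cache_size):
    """Makespan of a given task execution order on the caching server.

    schedule   -- list of task identifiers in execution order
    proc_time  -- dict mapping task identifier to its processing time
    cache_size -- number of intermediate results the cache can hold

    Forced caching with LRU eviction is assumed only for illustration.
    """
    cache = OrderedDict()              # task id -> None, ordered by recency
    total = 0
    for task in schedule:
        if task in cache:              # hit: result fetched at no cost
            cache.move_to_end(task)
        else:                          # miss: pay processing time, store result
            total += proc_time[task]
            cache[task] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return total

# Example: two users' request sequences interleaved into one schedule.
times = {"a": 3, "b": 2, "c": 4}
print(makespan(["a", "b", "a", "b", "c"], times, cache_size=2))  # 3+2+0+0+4 = 9
```

A scheduler for this problem would search over interleavings (and, in the optional-caching model, over storage decisions) to minimize such a cost; the sketch only evaluates one fixed choice.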