A Pareto-Efficient Algorithm for Data Stream Processing at Network Edges

Thanasis Loukopoulos, Nikos Tziritas, M. Koziri, G. Stamoulis, S. Khan
{"title":"A Pareto-Efficient Algorithm for Data Stream Processing at Network Edges","authors":"Thanasis Loukopoulos, Nikos Tziritas, M. Koziri, G. Stamoulis, S. Khan","doi":"10.1109/CloudCom2018.2018.00041","DOIUrl":null,"url":null,"abstract":"Data stream processing has received considerable attention from both research community and industry over the last years. Since latency is a key issue in data stream processing environments, the majority of the works existing in the literature focus on minimizing the latency experienced by the users. The aforementioned minimization takes place by assigning the data stream processing components close to data sources. Server consolidation is also a key issue for drastically reducing energy consumption in computing systems. Unfortunately, energy consumption and latency are two objective functions that may be in conflict with each other. Therefore, when the target function is to minimize energy consumption, the delay experienced by users may be considerable high, and the opposite. For the above reason there is a dire need to design strategies such that by targeting the minimization of energy consumption, there is a graceful degradation in latency, as well as the opposite. To achieve the above, we propose a Pareto-efficient algorithm that tackles the problem of data processing tasks placement simultaneously in both dimensions regarding the energy consumption and latency. The proposed algorithm outputs a set of solutions that are not dominated by any solution within the set regarding energy consumption and latency. The experimental results show that the proposed approach is superior against single-solution approaches because by targeting one objective function the other one can be gracefully degraded by choosing the appropriate solution.","PeriodicalId":365939,"journal":{"name":"2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CloudCom2018.2018.00041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Data stream processing has received considerable attention from both the research community and industry in recent years. Since latency is a key concern in data stream processing environments, most existing works in the literature focus on minimizing the latency experienced by users. This minimization is achieved by placing data stream processing components close to the data sources. Server consolidation is also a key technique for drastically reducing energy consumption in computing systems. Unfortunately, energy consumption and latency are two objective functions that may conflict with each other: when the target is to minimize energy consumption, the delay experienced by users may be considerably high, and vice versa. For this reason, there is a strong need for strategies that, while targeting the minimization of energy consumption, allow only a graceful degradation in latency, and vice versa. To achieve this, we propose a Pareto-efficient algorithm that tackles the placement of data stream processing tasks simultaneously in both dimensions, energy consumption and latency. The proposed algorithm outputs a set of solutions, none of which is dominated by any other solution in the set with respect to energy consumption and latency. The experimental results show that the proposed approach is superior to single-solution approaches, because when one objective function is targeted, the degradation in the other can be kept graceful by choosing an appropriate solution from the set.
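The core mechanism the abstract describes is non-dominated (Pareto) filtering over candidate task placements evaluated on the two objectives, energy consumption and latency. The following Python sketch illustrates only that filtering step; the Placement structure, the objective values, and the function names are illustrative assumptions, not the paper's algorithm or data model.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Placement:
    """A candidate assignment of processing tasks to servers (hypothetical model)."""
    assignment: Dict[str, str]  # task -> server
    energy: float               # estimated energy consumption (lower is better)
    latency: float              # estimated end-to-end latency (lower is better)

def dominates(a: Placement, b: Placement) -> bool:
    """a dominates b if it is no worse in both objectives and strictly better in at least one."""
    return (a.energy <= b.energy and a.latency <= b.latency and
            (a.energy < b.energy or a.latency < b.latency))

def pareto_front(candidates: List[Placement]) -> List[Placement]:
    """Keep only the placements not dominated by any other candidate."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q is not p)]

# Example with hypothetical numbers: the third candidate is dominated by the first,
# so the front keeps the two genuine energy/latency trade-off points.
candidates = [
    Placement({"t1": "edge1"}, energy=10.0, latency=5.0),
    Placement({"t1": "edge2"}, energy=7.0, latency=9.0),
    Placement({"t1": "cloud"}, energy=12.0, latency=6.0),
]
for p in pareto_front(candidates):
    print(p.assignment, p.energy, p.latency)
```

Returning the whole non-dominated set, rather than a single optimum, is what lets an operator pick the point on the energy/latency trade-off that suits the deployment, which is the behaviour the abstract attributes to the proposed algorithm.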