Adaptive Control of Apache Spark's Data Caching Mechanism Based on Workload Characteristics

Hideo Inagaki, Tomoyuki Fujii, Ryota Kawashima, H. Matsuo
{"title":"Adaptive Control of Apache Spark's Data Caching Mechanism Based on Workload Characteristics","authors":"Hideo Inagaki, Tomoyuki Fujii, Ryota Kawashima, H. Matsuo","doi":"10.1109/W-FICLOUD.2018.00016","DOIUrl":null,"url":null,"abstract":"Apache Spark caches reusable data into memory/disk. From our preliminary evaluation, we have found that a memory-and-disk caching is ineffective compared to disk-only caching when memory usage has reached its limit. This is because a thrashing state involving frequent data move between the memory and the disk occurs for a memory-and-disk caching. Spark has introduced a thrashing avoidance method for a single RDD (Resilient Distributed Dataset), but it cannot be applied to workloads using multiple RDDs because prior detection of the dependencies between the RDDs is difficult due to unpredictable access pattern. In this paper, we propose a thrashing avoidance method for such workloads. Our method adaptively modifies the cache I/O behavior depending on characteristics of the workload. In particular, caching data are directly written to the disk instead of the memory if cached data are frequently moved from the memory to the disk. Further, cached data are directly returned to the execution-memory instead of the storage-memory if cached data in the disk are required. Our method can adaptively select the optimal cache I/O behavior by observing workload characteristics at runtime instead of analyzing the dependence among RDDs. 
Evaluation results showed that execution time was reduced by 33% for KMeans using the modified Spark memory-and-disk caching rather than the original.","PeriodicalId":218683,"journal":{"name":"2018 6th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 6th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/W-FICLOUD.2018.00016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Apache Spark caches reusable data in memory and on disk. Our preliminary evaluation found that memory-and-disk caching is less effective than disk-only caching once memory usage reaches its limit, because memory-and-disk caching enters a thrashing state in which data move frequently between memory and disk. Spark provides a thrashing-avoidance method for a single RDD (Resilient Distributed Dataset), but it cannot be applied to workloads that use multiple RDDs, because the dependencies between RDDs are difficult to detect in advance due to unpredictable access patterns. In this paper, we propose a thrashing-avoidance method for such workloads. Our method adaptively modifies the cache I/O behavior according to the workload's characteristics. In particular, cache data are written directly to disk instead of memory when cached data are frequently moved from memory to disk; further, cached data on disk are returned directly to execution memory instead of storage memory when they are needed. Our method adaptively selects the optimal cache I/O behavior by observing workload characteristics at runtime rather than by analyzing dependencies among RDDs. Evaluation results show that the modified memory-and-disk caching reduced execution time by 33% for KMeans compared to the original Spark.
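The adaptive policy described above can be illustrated with a small, self-contained sketch. This is not the authors' implementation (which modifies Spark's internals); it is a hypothetical pure-Python model of the runtime heuristic: track how often recent cache writes trigger memory-to-disk eviction, and once that rate crosses a threshold, treat the workload as thrashing and direct new cache writes straight to disk. The class name, window size, and threshold are illustrative assumptions, not values from the paper.

```python
from collections import deque


class AdaptiveCachePolicy:
    """Illustrative model of adaptive cache I/O control: when cached
    blocks are frequently evicted from memory to disk (a thrashing
    signal), new cache writes bypass memory and go directly to disk."""

    def __init__(self, window: int = 10, thrash_threshold: float = 0.5):
        self.window = window                          # recent writes to observe
        self.thrash_threshold = thrash_threshold      # eviction rate that signals thrashing
        self.recent_evictions = deque(maxlen=window)  # 1 if a write caused an eviction

    def record_write(self, caused_eviction: bool) -> None:
        """Observe one cache write and whether it evicted memory-cached data."""
        self.recent_evictions.append(1 if caused_eviction else 0)

    def target_store(self) -> str:
        """Choose where the next cached block should be written."""
        if len(self.recent_evictions) < self.window:
            return "memory"                           # not enough history yet
        eviction_rate = sum(self.recent_evictions) / self.window
        # Frequent memory-to-disk movement indicates thrashing: bypass memory.
        return "disk" if eviction_rate >= self.thrash_threshold else "memory"


policy = AdaptiveCachePolicy(window=4, thrash_threshold=0.5)
for evicted in (False, True, True, True):  # 3 of the last 4 writes evicted data
    policy.record_write(evicted)
print(policy.target_store())  # -> disk
```

The key design point mirrored here is that the decision uses only runtime observations (the recent eviction rate), so no prior knowledge of inter-RDD dependencies is required.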