Adaptive Caching in Big SQL using the HDFS Cache
A. Floratou, N. Megiddo, Navneet Potti, Fatma Özcan, Uday Kale, Jan Schmitz-Hermes
Proceedings of the Seventh ACM Symposium on Cloud Computing, October 5, 2016
DOI: 10.1145/2987550.2987553
Citations: 29
Abstract
The memory and storage hierarchy in database systems is currently undergoing a radical evolution in the context of Big Data systems. SQL-on-Hadoop systems share data with other applications in the Big Data ecosystem by storing their data in HDFS, using open file formats. However, they do not provide automatic caching mechanisms for storing data in memory. In this paper, we describe the architecture of IBM Big SQL and its use of the HDFS cache as an alternative to the traditional buffer pool, allowing in-memory data to be shared with other Big Data applications. We design novel adaptive caching algorithms for Big SQL tailored to the challenges of such an external cache scenario. Our experimental evaluation shows that only our adaptive algorithms perform well for diverse workload characteristics, and are able to adapt to evolving data access patterns. Finally, we discuss our experiences in addressing the new challenges imposed by external caching and summarize our insights about how to direct ongoing architectural evolution of external caching mechanisms.
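The abstract does not spell out the adaptive algorithms themselves, but the core idea of adapting a cache policy to evolving access patterns can be illustrated with a toy sketch. The class below is a hypothetical policy, not the paper's actual algorithm: it scores each cached table by a weighted blend of recency and frequency, and when a recently evicted table is requested again (a hint that a recency-heavy policy evicted it prematurely) it shifts the weight toward frequency. All names (`AdaptiveTableCache`, `alpha`, `ghosts`) are invented for illustration.

```python
class AdaptiveTableCache:
    """Toy adaptive external cache (illustrative only).

    Scores each table by a blend of recency and frequency; a 'ghost'
    list of recently evicted tables provides feedback that nudges the
    blend weight when an eviction turns out to have been premature.
    """

    def __init__(self, capacity, alpha=0.5, step=0.1):
        self.capacity = capacity   # max number of cached tables
        self.alpha = alpha         # weight on recency vs. frequency
        self.step = step           # adaptation step size
        self.clock = 0             # logical access counter
        self.cached = {}           # table -> (last_access, access_count)
        self.ghosts = {}           # recently evicted table -> eviction time

    def access(self, table):
        """Record an access; return True on a cache hit, False on a miss."""
        self.clock += 1
        if table in self.cached:
            _, freq = self.cached[table]
            self.cached[table] = (self.clock, freq + 1)
            return True
        # Miss on a recently evicted table: lean more on frequency.
        if table in self.ghosts:
            self.alpha = max(0.0, self.alpha - self.step)
            del self.ghosts[table]
        if len(self.cached) >= self.capacity:
            victim = min(self.cached, key=self._score)
            self.ghosts[victim] = self.clock
            del self.cached[victim]
        self.cached[table] = (self.clock, 1)
        return False

    def _score(self, table):
        last, freq = self.cached[table]
        recency = last / self.clock          # normalized into (0, 1]
        frequency = freq / self.clock        # normalized into (0, 1]
        return self.alpha * recency + (1 - self.alpha) * frequency
```

In the external-cache setting the paper describes, the "eviction" step would correspond to issuing HDFS cache directives rather than managing an in-process buffer pool, but the feedback-driven weight adjustment is the kind of workload adaptation the abstract refers to.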