{"title":"Hippo:对HDFS的管道感知内存缓存的增强","authors":"Lan Wei, W. Lian, Kuien Liu, Yongji Wang","doi":"10.1109/ICCCN.2014.6911847","DOIUrl":null,"url":null,"abstract":"In the age of big data, distributed computing frameworks tend to coexist and collaborate in pipeline using one scheduler. While a variety of techniques for reducing I/O latency have been proposed, these are rarely specific for the whole pipeline performance. This paper proposes memory management logic called “Hippo” which targets distributed systems and in particular “pipelined” applications that might span differing big data frameworks. Though individual frameworks may have internal memory management primitives, Hippo proposes to make a generic framework that works agnostic of these highlevel operations. To increase the hit ratio of in-memory cache, this paper discusses the granularity of caching and how Hippo leverages the job dependency graph to make memory retention and pre-fetching decisions. Our evaluations demonstrate that job dependency is essential to improve the cache performance and a global cache policy maker, in most cases, significantly outperforms explicit caching by users.","PeriodicalId":404048,"journal":{"name":"2014 23rd International Conference on Computer Communication and Networks (ICCCN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Hippo: An enhancement of pipeline-aware in-memory caching for HDFS\",\"authors\":\"Lan Wei, W. Lian, Kuien Liu, Yongji Wang\",\"doi\":\"10.1109/ICCCN.2014.6911847\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the age of big data, distributed computing frameworks tend to coexist and collaborate in pipeline using one scheduler. While a variety of techniques for reducing I/O latency have been proposed, these are rarely specific for the whole pipeline performance. This paper proposes memory management logic called “Hippo” which targets distributed systems and in particular “pipelined” applications that might span differing big data frameworks. Though individual frameworks may have internal memory management primitives, Hippo proposes to make a generic framework that works agnostic of these highlevel operations. To increase the hit ratio of in-memory cache, this paper discusses the granularity of caching and how Hippo leverages the job dependency graph to make memory retention and pre-fetching decisions. 
Our evaluations demonstrate that job dependency is essential to improve the cache performance and a global cache policy maker, in most cases, significantly outperforms explicit caching by users.\",\"PeriodicalId\":404048,\"journal\":{\"name\":\"2014 23rd International Conference on Computer Communication and Networks (ICCCN)\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 23rd International Conference on Computer Communication and Networks (ICCCN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCN.2014.6911847\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 23rd International Conference on Computer Communication and Networks (ICCCN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCN.2014.6911847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hippo: An enhancement of pipeline-aware in-memory caching for HDFS
In the age of big data, distributed computing frameworks tend to coexist and collaborate in pipelines under a single scheduler. While a variety of techniques for reducing I/O latency have been proposed, few of them address the performance of the pipeline as a whole. This paper proposes memory management logic called “Hippo”, which targets distributed systems and in particular “pipelined” applications that may span different big data frameworks. Though individual frameworks may have their own internal memory management primitives, Hippo provides a generic framework that works agnostically of these high-level operations. To increase the hit ratio of the in-memory cache, this paper discusses the granularity of caching and how Hippo leverages the job dependency graph to make memory retention and pre-fetching decisions. Our evaluations demonstrate that job dependency is essential to improving cache performance, and that a global cache policy maker, in most cases, significantly outperforms explicit caching by users.
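The abstract only sketches the idea of dependency-driven retention and pre-fetching; the details live in the paper itself. As a rough, hypothetical illustration of what such a policy could look like, the sketch below retains a job's output in memory while unfinished downstream consumers still need it, and nominates for pre-fetching the inputs of jobs whose upstream dependencies have all completed. All class, method, and dataset names here are invented for illustration and are not Hippo's actual API.

```python
from collections import defaultdict

class PipelineCachePolicy:
    """Toy dependency-graph cache policy (illustrative only, not Hippo's API)."""

    def __init__(self):
        self.upstream = defaultdict(set)   # job -> jobs it depends on
        self.consumers = defaultdict(set)  # job -> jobs that consume its output
        self.inputs = defaultdict(set)     # job -> datasets it reads
        self.jobs = set()
        self.finished = set()

    def add_job(self, job, inputs=(), upstream=()):
        self.jobs.add(job)
        self.inputs[job] |= set(inputs)
        for dep in upstream:
            self.upstream[job].add(dep)
            self.consumers[dep].add(job)

    def mark_finished(self, job):
        self.finished.add(job)

    def retain(self, producer):
        # Retention decision: keep the producer's output cached while any
        # downstream consumer in the dependency graph has not yet run.
        return any(c not in self.finished for c in self.consumers[producer])

    def prefetch_candidates(self):
        # Pre-fetch decision: gather the inputs of jobs whose upstream
        # dependencies are all finished, i.e. jobs likely to launch next.
        ready = (j for j in self.jobs
                 if j not in self.finished and self.upstream[j] <= self.finished)
        candidates = set()
        for job in ready:
            candidates |= self.inputs[job]
        return candidates

# Example pipeline: extract -> transform -> load
policy = PipelineCachePolicy()
policy.add_job("extract", inputs={"raw.log"})
policy.add_job("transform", inputs={"extract.out"}, upstream={"extract"})
policy.add_job("load", inputs={"transform.out"}, upstream={"transform"})

policy.mark_finished("extract")
print(policy.retain("extract"))      # True: "transform" still needs its output
print(policy.prefetch_candidates())  # {"extract.out"}: "transform" is ready to run
```

The key point the abstract makes is visible even in this toy version: the retention and pre-fetch choices are driven by the global job dependency graph rather than by per-framework heuristics or explicit user caching calls.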