{"title":"A Cost-Benefit Scheme for High Performance Predictive Prefetching","authors":"V. Vellanki, A. Chervenak","doi":"10.1145/331532.331582","DOIUrl":null,"url":null,"abstract":"High-performance computing systems will increasingly rely on prefetching data from disk to overcome long disk access times and maintain high utilization of parallel I/O systems. This paper evaluates a prefetching technique that chooses which blocks to prefetch based on their probability of access and decides whether to prefetch a particular block at a given time using a cost-benefit analysis. The algorithm uses a probability tree to record past accesses and to predict future access patterns. We simulate this prefetching algorithm with a variety of I/O traces. We show that our predictive prefetching scheme combined with simple one-block-lookahead prefetching produces good performance for a variety of workloads. The scheme reduces file cache miss rates by up to 36% for workloads that receive no benefit from sequential prefetching. We showthat the memory requirements for building the probability tree are reasonable, requiring about a megabyte for good performance. The probability tree constructed by the prefetching scheme predicts around 60-70% of the accesses. Next, we discuss ways of improving the performance of the prefetching scheme. Finally, we show that the cost-benefit analysis enables the tree-based prefetching scheme to perform an optimal amount of prefetching.","PeriodicalId":354898,"journal":{"name":"ACM/IEEE SC 1999 Conference (SC'99)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM/IEEE SC 1999 Conference (SC'99)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/331532.331582","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 30
Abstract
High-performance computing systems will increasingly rely on prefetching data from disk to overcome long disk access times and maintain high utilization of parallel I/O systems. This paper evaluates a prefetching technique that chooses which blocks to prefetch based on their probability of access and decides whether to prefetch a particular block at a given time using a cost-benefit analysis. The algorithm uses a probability tree to record past accesses and to predict future access patterns. We simulate this prefetching algorithm with a variety of I/O traces. We show that our predictive prefetching scheme combined with simple one-block-lookahead prefetching produces good performance for a variety of workloads. The scheme reduces file cache miss rates by up to 36% for workloads that receive no benefit from sequential prefetching. We show that the memory requirements for building the probability tree are reasonable, requiring about a megabyte for good performance. The probability tree constructed by the prefetching scheme predicts around 60-70% of the accesses. Next, we discuss ways of improving the performance of the prefetching scheme. Finally, we show that the cost-benefit analysis enables the tree-based prefetching scheme to perform an optimal amount of prefetching.
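The abstract rests on two ideas: a probability tree built over recent block accesses, and a cost-benefit test that gates each candidate prefetch. The sketch below is a minimal Python illustration of that general approach, not the paper's algorithm; the class names, the fixed history depth, and the fetch_cost/miss_penalty cost model are simplifying assumptions chosen for clarity.

```python
# Illustrative sketch only: a small probability tree over recent block accesses
# plus a cost-benefit gate on prefetching. All names and the cost model are
# assumptions for illustration, not the scheme described in the paper.
from collections import defaultdict

class ProbabilityTreeNode:
    """Node keyed by block ID; children record which blocks followed this path."""
    def __init__(self):
        self.count = 0                                  # times this access path was seen
        self.children = defaultdict(ProbabilityTreeNode)

class PredictivePrefetcher:
    def __init__(self, depth=2, fetch_cost=1.0, miss_penalty=10.0):
        self.root = ProbabilityTreeNode()
        self.depth = depth                  # length of access context tracked (assumed)
        self.history = []                   # most recent block accesses
        self.fetch_cost = fetch_cost        # assumed cost of issuing one prefetch I/O
        self.miss_penalty = miss_penalty    # assumed cost of a file cache miss

    def record_access(self, block):
        """Insert the path (recent context + new block) into the probability tree."""
        node = self.root
        for b in self.history + [block]:
            node = node.children[b]
            node.count += 1
        self.history = (self.history + [block])[-self.depth:]

    def candidates(self):
        """Return (block, probability) pairs for blocks likely to follow the current context."""
        node = self.root
        for b in self.history:
            if b not in node.children:
                return []
            node = node.children[b]
        total = sum(c.count for c in node.children.values())
        return [(b, c.count / total) for b, c in node.children.items()] if total else []

    def blocks_to_prefetch(self):
        """Cost-benefit gate: prefetch only when expected miss savings exceed the fetch cost."""
        return [b for b, p in self.candidates()
                if p * self.miss_penalty > self.fetch_cost]

# Example: train on a short synthetic trace, then ask what to prefetch next.
prefetcher = PredictivePrefetcher()
for blk in [1, 2, 3, 1, 2, 4, 1, 2, 3]:
    prefetcher.record_access(blk)
print(prefetcher.blocks_to_prefetch())   # after context [2, 3], block 1 clears the cost-benefit test
```

In this toy version the benefit of a prefetch is the access probability times an assumed miss penalty, so low-probability candidates are filtered out rather than fetched speculatively; the paper's actual cost-benefit analysis is more detailed, but the gating idea is the same.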