Usage Pattern Based Prefetching For Mechanical Mass Storage

S. Sarwar, Y. Mahmood, H. F. Ahmed, Raihan-Ur-Rasool, H. Takahashi
{"title":"Usage Pattern Based Prefetching For Mechanical Mass Storage","authors":"S. Sarwar, Y. Mahmood, H. F. Ahmed, Raihan-Ur-Rasool, H. Takahashi","doi":"10.1109/HONET.2008.4810223","DOIUrl":null,"url":null,"abstract":"Cache being the fastest medium in memory hierarchy has a vital role to play in concealing delays and access latencies during 10 operations and hence in improving system response time. One of the most substantial approaches to fully exploit the significance of cache memory is data prefetching, where we envisage future requests of users and take data to memory in advance. Current prefetching techniques, performing limited prefetching, are based upon locality of reference principle (situation specific); Markov series (slow for practical implementation) or dual data caching (quite burdensome for programmer) with biased cache replacement policies. So we present a novel 'usage pattern based' approach for predictive prefetching; employing proven neural networks to broaden the scope of prefetching at user level. The efficacy of approach is revealed by its accuracy and minimal resource usage as affirmed by preliminary results.","PeriodicalId":433243,"journal":{"name":"2008 International Symposium on High Capacity Optical Networks and Enabling Technologies","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 International Symposium on High Capacity Optical Networks and Enabling Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HONET.2008.4810223","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

As the fastest medium in the memory hierarchy, the cache plays a vital role in concealing delays and access latencies during I/O operations, and hence in improving system response time. One of the most effective ways to exploit cache memory is data prefetching, in which future user requests are anticipated and the corresponding data is brought into memory in advance. Current techniques perform only limited prefetching: they rely on the locality-of-reference principle (situation specific), Markov chains (too slow for practical implementation), or dual data caching (burdensome for the programmer), combined with biased cache replacement policies. We therefore present a novel 'usage pattern based' approach to predictive prefetching that employs proven neural networks to broaden the scope of prefetching to the user level. Preliminary results affirm the efficacy of the approach in terms of prediction accuracy and minimal resource usage.
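The paper itself gives no code, but the following minimal sketch illustrates the general idea of usage-pattern-based prefetching with a neural network: a tiny feed-forward model is trained on a user's file-access history and predicts the next file to bring into the cache. Everything here is an illustrative assumption (the UsagePatternPrefetcher class, the one-hot encoding of file IDs, the single hidden layer, and the synthetic access trace), not the authors' actual implementation.

```python
# Illustrative sketch only: a small feed-forward network that learns a
# per-user file-access pattern and predicts the next file to prefetch.
# Architecture and training loop are assumptions for exposition, not
# the method described in the paper.
import numpy as np

class UsagePatternPrefetcher:
    def __init__(self, n_files, history=3, hidden=16, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.n_files, self.history, self.lr = n_files, history, lr
        # Input: one-hot encoding of the last `history` accesses.
        self.W1 = rng.normal(0, 0.1, (history * n_files, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, n_files))

    def _encode(self, recent):
        x = np.zeros(self.history * self.n_files)
        for i, f in enumerate(recent):
            x[i * self.n_files + f] = 1.0
        return x

    def _forward(self, x):
        h = np.tanh(x @ self.W1)
        z = h @ self.W2
        p = np.exp(z - z.max())
        return h, p / p.sum()          # softmax over candidate files

    def train(self, trace, epochs=200):
        # Slide a window over the trace: the last `history` accesses are
        # the features, the access that follows them is the target.
        for _ in range(epochs):
            for t in range(self.history, len(trace)):
                x = self._encode(trace[t - self.history:t])
                h, p = self._forward(x)
                y = np.zeros(self.n_files)
                y[trace[t]] = 1.0
                # Backprop through softmax (cross-entropy) and tanh.
                dz = p - y
                dh = self.W2 @ dz
                self.W2 -= self.lr * np.outer(h, dz)
                self.W1 -= self.lr * np.outer(x, dh * (1 - h * h))

    def predict_next(self, recent):
        _, p = self._forward(self._encode(recent))
        return int(np.argmax(p))       # file ID to prefetch

if __name__ == "__main__":
    # Synthetic per-user trace with a repeating usage pattern.
    trace = [0, 1, 2, 0, 1, 3] * 50
    pf = UsagePatternPrefetcher(n_files=4)
    pf.train(trace)
    print(pf.predict_next([0, 1, 2]))  # pattern says 0 follows 0,1,2
    print(pf.predict_next([2, 0, 1]))  # pattern says 3 follows 2,0,1
```

In a real system the predicted file ID would drive an asynchronous read-ahead into the cache before the request arrives; here the prediction is simply printed.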