Adaptive Cache Allocation with Prefetching Policy over End-to-End Data Processing

H. Qin, Li Zhu
Journal of Information Hiding and Multimedia Signal Processing, pp. 152-160, published 2017-07-18
DOI: 10.4236/JSIP.2017.83010
JCR: Q3 (Computer Science)
Citations: 0

Abstract

Given the widening speed gap between storage-system access and processor computation, end-to-end data processing has become a bottleneck that limits the overall performance of computer systems on the Internet. Based on an analysis of data-processing behavior, an adaptive cache organization scheme with fast address calculation is proposed. The scheme exploits the characteristics of stack-space data access, adopting a fast address-calculation strategy that reduces the hit time of stack accesses. Adaptively, the stack cache can be turned off entirely when a stack overflow occurs, avoiding the impact of stack switching on processor performance. In addition, a prefetching policy is developed from the behavior of the instruction cache and the miss behavior of the data cache, combined with data captured from the state of the miss (failover) queue. Finally, the proposed method preserves the order of instruction and data accesses, which facilitates extracting prefetch candidates during end-to-end data processing.
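The fast-address-calculation and overflow-driven disable ideas in the abstract can be illustrated with a toy model. This is a minimal sketch under assumptions of my own: the `StackCache` class, the stack bounds, the 64-byte line size, and the direct-mapped organization are all hypothetical and are not taken from the paper. The point it shows is that stack accesses are addressed relative to the stack pointer, so the cache line can be selected by pure arithmetic on `sp + offset`, and the whole structure can be switched off when the stack pointer leaves its region.

```python
# Toy model (illustrative only; not the paper's actual design).
STACK_BASE = 0x8000_0000      # assumed top of the stack region
STACK_LIMIT = 0x7FFF_0000     # assumed lower bound; crossing it means "overflow"
LINE_SIZE = 64
NUM_LINES = 128

class StackCache:
    def __init__(self):
        self.enabled = True
        self.lines = {}            # cache index -> tag currently resident

    def index_and_tag(self, addr):
        # Fast address calculation: index and tag come straight from
        # arithmetic on the address, with no associative search.
        line = addr // LINE_SIZE
        return line % NUM_LINES, line // NUM_LINES

    def access(self, sp, offset):
        """Return True on a hit. Disables itself when sp grows past the
        stack limit (the 'turn the stack cache off' behavior)."""
        if sp < STACK_LIMIT:               # stack overflow detected
            self.enabled = False
            self.lines.clear()
        if not self.enabled:
            return False                   # fall back to the normal data cache
        idx, tag = self.index_and_tag(sp + offset)
        hit = self.lines.get(idx) == tag
        self.lines[idx] = tag              # fill the line on a miss
        return hit
```

A first access to a line misses and fills it, a second access within the same 64-byte line hits, and any access with `sp` below the assumed limit permanently disables the stack cache.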
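The miss-queue-driven prefetching policy can likewise be sketched as a toy simulation. Everything here is an assumption for illustration: the `MissQueuePrefetcher` class, the queue depth of 4, and the next-sequential-line heuristic are hypothetical, since the abstract does not specify the actual policy. What the sketch captures is the stated idea that prefetch decisions are combined with the miss-queue state and that the order of demand accesses is preserved.

```python
from collections import deque

LINE = 64
QUEUE_DEPTH = 4   # assumed miss-queue capacity

class MissQueuePrefetcher:
    """Toy prefetcher driven by the data-cache miss queue: each demand
    miss is queued in program order, and a next-line prefetch is issued
    only while the queue has spare slots, so prefetches never crowd out
    or reorder older demand misses."""
    def __init__(self):
        self.queue = deque()
        self.issued = []         # prefetch addresses, in issue order

    def on_miss(self, addr):
        if len(self.queue) >= QUEUE_DEPTH:
            return               # queue saturated: demand misses take priority
        self.queue.append(addr)
        next_line = (addr // LINE + 1) * LINE
        if len(self.queue) < QUEUE_DEPTH:    # spare slot left for the prefetch
            self.issued.append(next_line)

    def on_fill(self):
        if self.queue:
            self.queue.popleft() # oldest miss is serviced first: order kept
```

On a miss to address `0x1000` the sketch enqueues the miss and prefetches the next 64-byte line at `0x1040`; fills retire queue entries strictly in arrival order.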
Source journal metrics: CiteScore 3.20; self-citation rate 0.00%.