Using a Local Prefetch Strategy to Obtain Temporal Time Predictability

Bekim Cilku, P. Puschner
{"title":"利用局部预取策略获得时间可预测性","authors":"Bekim Cilku, P. Puschner","doi":"10.1109/ISORCW.2011.41","DOIUrl":null,"url":null,"abstract":"Today's embedded systems are considering cache as inherent part of their design. Unfortunately, cache memory behavior heavily depends on the past references which model a large execution history and makes WCET analysis impractical. This paper presents a novel prefetch memory mechanism that simplifies the prediction of cache hits/misses because the memory access times are independent of the execution history. We use local prefetching into on-chip memory together with a custom-designed prefetch controller instead of cache memories to provide for time-predictable memory accesses. To be competitive in code execution time, our approach relies on a special organization of main memory and on a modified compiler that generates code layouts to allow for parallel prefetching from different memory banks. The proposed solution is still in a conceptual phase. The paper discusses design decisions and parameters to be explored.","PeriodicalId":126022,"journal":{"name":"2011 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Using a Local Prefetch Strategy to Obtain Temporal Time Predictability\",\"authors\":\"Bekim Cilku, P. Puschner\",\"doi\":\"10.1109/ISORCW.2011.41\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Today's embedded systems are considering cache as inherent part of their design. Unfortunately, cache memory behavior heavily depends on the past references which model a large execution history and makes WCET analysis impractical. This paper presents a novel prefetch memory mechanism that simplifies the prediction of cache hits/misses because the memory access times are independent of the execution history. We use local prefetching into on-chip memory together with a custom-designed prefetch controller instead of cache memories to provide for time-predictable memory accesses. To be competitive in code execution time, our approach relies on a special organization of main memory and on a modified compiler that generates code layouts to allow for parallel prefetching from different memory banks. The proposed solution is still in a conceptual phase. 
The paper discusses design decisions and parameters to be explored.\",\"PeriodicalId\":126022,\"journal\":{\"name\":\"2011 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISORCW.2011.41\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISORCW.2011.41","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Today's embedded systems consider the cache an inherent part of their design. Unfortunately, cache-memory behavior depends heavily on past references, which implies a large execution history and makes WCET analysis impractical. This paper presents a novel prefetch memory mechanism that simplifies the prediction of cache hits/misses because memory access times are independent of the execution history. We use local prefetching into on-chip memory, together with a custom-designed prefetch controller, instead of cache memories to provide time-predictable memory accesses. To be competitive in code execution time, our approach relies on a special organization of main memory and on a modified compiler that generates code layouts allowing parallel prefetching from different memory banks. The proposed solution is still in a conceptual phase. The paper discusses the design decisions and parameters to be explored.
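The approach is still conceptual, but the core idea of overlapping the execution of one code block with the fetch of its successor from a different memory bank can be illustrated in software. The following C sketch is not taken from the paper; it assumes a double-buffered on-chip memory, two interleaved main-memory banks, and a fixed block size (all hypothetical parameters) to show how a simple sequential prefetch controller could stay one block ahead of execution.

```c
/*
 * Hypothetical sketch (not from the paper): a software model of a
 * sequential prefetch controller that double-buffers code blocks in
 * on-chip memory. Consecutive blocks are assumed to reside in
 * alternating main-memory banks, so fetching block i+1 can overlap
 * with executing block i without a bank conflict. Block size, bank
 * count, and memory sizes are illustrative assumptions only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_WORDS     64u   /* assumed words per prefetch block      */
#define NUM_BANKS        2u   /* assumed interleaved main-memory banks */
#define BLOCKS_PER_BANK  8u   /* assumed blocks held in each bank      */

/* Main memory modeled as interleaved banks of code blocks
 * (zero-initialized here; a real system would hold instructions). */
static uint32_t main_memory[NUM_BANKS][BLOCKS_PER_BANK][BLOCK_WORDS];

/* On-chip memory: two buffers, one executing, one being filled. */
static uint32_t onchip_buf[2][BLOCK_WORDS];

typedef struct {
    uint32_t next_block;  /* linear index of the next sequential block */
    uint32_t fill_buf;    /* on-chip buffer currently being filled     */
} prefetch_ctrl_t;

/* Copy the next sequential block into the idle on-chip buffer.
 * Because block i maps to bank (i % NUM_BANKS), consecutive blocks
 * come from different banks and their fetches do not contend. */
static void prefetch_next(prefetch_ctrl_t *pc)
{
    uint32_t bank  = pc->next_block % NUM_BANKS;
    uint32_t index = pc->next_block / NUM_BANKS;

    memcpy(onchip_buf[pc->fill_buf], main_memory[bank][index],
           BLOCK_WORDS * sizeof(uint32_t));
    pc->next_block++;
    pc->fill_buf ^= 1u;   /* the other buffer is filled next time */
}

int main(void)
{
    prefetch_ctrl_t pc = { .next_block = 0, .fill_buf = 0 };

    prefetch_next(&pc);   /* preload block 0 into buffer 0 */

    for (uint32_t i = 0; i < NUM_BANKS * BLOCKS_PER_BANK; i++) {
        if (i + 1 < NUM_BANKS * BLOCKS_PER_BANK)
            prefetch_next(&pc);  /* fetch block i+1 into the idle buffer;
                                    in hardware this overlaps execution */
        printf("executing block %u from on-chip buffer %u\n", i, i % 2u);
    }
    return 0;
}
```

In hardware, the memcpy in prefetch_next would correspond to a DMA-style transfer running concurrently with instruction execution from the other buffer, and the compiler-generated code layout mentioned in the abstract would be responsible for mapping consecutive blocks onto different banks so that these transfers never collide.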