Guard the Cache: Dispatch Optimization in a Contextual Role-oriented Language

L. Schütze, Cornelius Kummer, J. Castrillón
{"title":"Guard the Cache: Dispatch Optimization in a Contextual Role-oriented Language","authors":"L. Schütze, Cornelius Kummer, J. Castrillón","doi":"10.1145/3570353.3570357","DOIUrl":null,"url":null,"abstract":"Adaptive programming models are increasingly important as context-dependent software conquers more domains. One such a model is role-oriented programming where behavioral changes are implemented by objects playing and renouncing roles. As with other adaptive models, the overhead introduced by source code adaptations is a major showstopper for role-oriented programs. This is in part because the optimizations of object-oriented virtual machines (VMs) do not provide the same performance gains when applied to role-oriented programs. Recently, dispatch plans have been shown to enable optimizations beyond those in VMs, thereby improving the performance of role programs with low variability. This paper introduces guarded dispatch plans, an extension of dispatch plans with a context-aware guarding mechanism that allows reuse in high-variability scenarios. Fine-grained guards use run-time feedback to partially reuse dispatch plans across call sites when contexts are changing. We present an algorithm to construct and compose guarded dispatch plans and provide a reference implementation of the approach. We show that our approach is able to gracefully degrade into a default dispatch approach when variability increases. The implementation is evaluated with synthetic benchmarks capturing different characteristics. Compared to the state-of-the-art implementation in ObjectTeams we achieved a mean speedup of 3.3 × in static cases, 3.0 × at low variability and the same performance in highly dynamic cases.","PeriodicalId":340514,"journal":{"name":"Proceedings of the 14th ACM International Workshop on Context-Oriented Programming and Advanced Modularity","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 14th ACM International Workshop on Context-Oriented Programming and Advanced Modularity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3570353.3570357","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Adaptive programming models are increasingly important as context-dependent software conquers more domains. One such model is role-oriented programming, where behavioral changes are implemented by objects playing and renouncing roles. As with other adaptive models, the overhead introduced by source code adaptations is a major showstopper for role-oriented programs. This is in part because the optimizations of object-oriented virtual machines (VMs) do not provide the same performance gains when applied to role-oriented programs. Recently, dispatch plans have been shown to enable optimizations beyond those in VMs, thereby improving the performance of role programs with low variability. This paper introduces guarded dispatch plans, an extension of dispatch plans with a context-aware guarding mechanism that allows reuse in high-variability scenarios. Fine-grained guards use run-time feedback to partially reuse dispatch plans across call sites when contexts are changing. We present an algorithm to construct and compose guarded dispatch plans and provide a reference implementation of the approach. We show that our approach is able to gracefully degrade into a default dispatch approach when variability increases. The implementation is evaluated with synthetic benchmarks capturing different characteristics. Compared to the state-of-the-art implementation in ObjectTeams, we achieved a mean speedup of 3.3× in static cases, 3.0× at low variability, and the same performance in highly dynamic cases.
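The guarded dispatch plan mechanism described in the abstract can be pictured roughly as a per-call-site cache of precomputed dispatch chains, each protected by a guard over the role/context configuration observed when the chain was built. The Java sketch below only illustrates that idea; the class and member names (GuardedDispatchSketch, ContextGuard, DispatchPlan, sameContextAs) are hypothetical and are not taken from the paper's ObjectTeams-based implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Minimal, hypothetical sketch of a guarded dispatch cache; not the paper's API.
final class GuardedDispatchSketch {

    /** A guard captures the context observed when its plan was built. */
    interface ContextGuard {
        boolean stillValid(Object receiver);
    }

    /** A dispatch plan is a precomputed chain of (role) method invocations. */
    interface DispatchPlan {
        Object invoke(Object receiver, Object[] args);
    }

    /** One cache entry: a plan plus the guard that protects its reuse. */
    record GuardedPlan(ContextGuard guard, DispatchPlan plan) {}

    private final List<GuardedPlan> cache = new ArrayList<>();
    private final Function<Object, DispatchPlan> planBuilder;

    GuardedDispatchSketch(Function<Object, DispatchPlan> planBuilder) {
        this.planBuilder = planBuilder;
    }

    Object dispatch(Object receiver, Object[] args) {
        // Fast path: reuse any cached plan whose guard still matches the
        // receiver's current context.
        for (GuardedPlan entry : cache) {
            if (entry.guard().stillValid(receiver)) {
                return entry.plan().invoke(receiver, args);
            }
        }
        // Slow path: build a fresh plan for the current context, guard it,
        // cache it for later reuse, and execute it once.
        DispatchPlan plan = planBuilder.apply(receiver);
        cache.add(new GuardedPlan(current -> sameContextAs(receiver, current), plan));
        return plan.invoke(receiver, args);
    }

    private static boolean sameContextAs(Object original, Object current) {
        // Placeholder context check; a real guard would compare the roles
        // currently played by the receiver and their activation state.
        return original == current;
    }
}
```

In this reading, when guards keep failing because contexts change at every call (high variability), an implementation along these lines would stop caching and fall back to a default dispatch path, mirroring the graceful degradation the abstract describes.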