Polyhedral Tensor Schedulers

Benoît Meister, E. Papenhausen, B. Pradelle
DOI: 10.1109/HPCS48598.2019.9188233
Published in: 2019 International Conference on High Performance Computing & Simulation (HPCS), July 2019
Citations: 3

Abstract

Compiler optimizations based on the polyhedral model are able to automatically parallelize and optimize loop-based code. We acknowledge that while polyhedral techniques can represent a broad set of program transformations, important classes of programs could be parallelized just as well using less general but more tractable techniques. We apply this general idea to the polyhedral scheduling phase, which is one of the typical performance bottlenecks of polyhedral compilation. We focus on a class of programs in which enough parallelism is already exposed in the source program, and which includes Deep Learning layers and combinations thereof, as well as multilinear algebra kernels. We call these programs “tensor codes” and consequently call “tensor schedulers” the tractable polyhedral scheduling techniques presented here. The general idea is that we can significantly speed up polyhedral scheduling by restricting the set of transformations considered. As an extra benefit, having a small search space allows us to introduce non-linear cost models, which fills a gap in polyhedral cost models.
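To make the class of programs concrete, here is a hypothetical sketch (not taken from the paper) of a “tensor code”: a convolution-style loop nest in which every iteration of the outer loops writes a distinct output element. For such kernels the parallelism is already exposed in the source, so a scheduler only needs to select which loops to run in parallel rather than search for enabling transformations.

```python
import numpy as np

def conv_layer(inp, weights):
    """Direct 2-D convolution as an explicit loop nest.

    The (n, co, y, x) loops each index a distinct output element, so
    all four are trivially parallel; only the (ci, ky, kx) loops carry
    a reduction dependence. This is the shape of program the abstract
    calls a "tensor code".
    """
    N, CI, H, W = inp.shape          # batch, in-channels, height, width
    CO, _, KH, KW = weights.shape    # out-channels, in-channels, kernel size
    out = np.zeros((N, CO, H - KH + 1, W - KW + 1))
    for n in range(N):                        # parallel
        for co in range(CO):                  # parallel
            for y in range(H - KH + 1):       # parallel
                for x in range(W - KW + 1):   # parallel
                    for ci in range(CI):      # reduction
                        for ky in range(KH):  # reduction
                            for kx in range(KW):  # reduction
                                out[n, co, y, x] += (
                                    inp[n, ci, y + ky, x + kx]
                                    * weights[co, ci, ky, kx]
                                )
    return out
```

Because no loop transformation is needed to expose parallelism here, a restricted scheduler can skip the expensive general-purpose search and evaluate only a small set of candidate schedules, which is what makes the non-linear cost models mentioned above affordable.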