Sparse-HeteroCL: From Sparse Tensor Algebra to Highly Customized Accelerators on FPGAs

Jize Pang, Lei Gong, Chao Wang, Xuehai Zhou
{"title":"Sparse-HeteroCL: From Sparse Tensor Algebra to Highly Customized Accelerators on FPGAs","authors":"Jize Pang, Lei Gong, Chao Wang, Xuehai Zhou","doi":"10.1109/CCGridW59191.2023.00061","DOIUrl":null,"url":null,"abstract":"Hardware-oriented domain-specific languages and hardware autogeneration pipelines for computationally intensive applications have received widespread attention because they can reduce the complexity of custom accelerator design and enable efficient FPGA accelerator generation. However, the existing hardware autogeneration tools are intended only for general computing and lack support for sparse tensor computation. To solve this problem, we present an end-to-end compilation tool called Sparse-HeteroCL, which inherits the idea of decoupling algorithm specification and customization from HeteroCL and expands it in three aspects: data structure, program description and computation schedule. In a preliminary performance evaluation, the workload incurred when using this compilation tool to write sparse tensor accelerators was compared with the corresponding workloads based on HeteroCL and HDL. 
The results show that compared with these existing languages, the programming efficiency is increased by average factors of 5.94 and 386.7, respectively, using our compilation tool.","PeriodicalId":341115,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGridW59191.2023.00061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Hardware-oriented domain-specific languages and hardware autogeneration pipelines for computationally intensive applications have received widespread attention because they can reduce the complexity of custom accelerator design and enable efficient FPGA accelerator generation. However, existing hardware autogeneration tools target only dense, general-purpose computing and lack support for sparse tensor computation. To address this problem, we present an end-to-end compilation tool called Sparse-HeteroCL, which inherits HeteroCL's idea of decoupling the algorithm specification from hardware customization and extends it in three aspects: data structures, program description, and computation scheduling. In a preliminary evaluation, the programming effort required to write sparse tensor accelerators with our compilation tool was compared with the corresponding effort using HeteroCL and HDL. The results show that, relative to these existing languages, our tool improves programming efficiency by average factors of 5.94 and 386.7, respectively.
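To make the problem concrete, the following is a minimal sketch of the kind of kernel such a tool must handle: a sparse matrix-vector multiply (SpMV) over a CSR-encoded matrix. This is plain Python for illustration only, not Sparse-HeteroCL syntax (the paper's actual API is not reproduced here); it shows the data-dependent loop bounds that dense-only autogeneration flows cannot schedule well.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a CSR-encoded sparse matrix A.

    values  -- nonzero entries of A, row by row
    col_idx -- column index of each nonzero
    row_ptr -- row_ptr[i]:row_ptr[i+1] delimits row i's nonzeros
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # The inner trip count varies per row (it depends on the data),
        # which is the irregularity that breaks dense-oriented
        # scheduling and makes dedicated sparse support necessary.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[10,  0,  0],
#      [ 0, 20, 30],
#      [ 0,  0, 40]]
values  = [10.0, 20.0, 30.0, 40.0]
col_idx = [0, 1, 2, 2]
row_ptr = [0, 1, 3, 4]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [10.0, 50.0, 40.0]
```

In a HeteroCL-style flow, this algorithm description would stay fixed while schedule primitives customize the hardware; Sparse-HeteroCL's contribution is extending that decoupling to sparse formats like CSR.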