High-Throughput Multi-Threaded Sum-Product Network Inference in the Reconfigurable Cloud

Micha Ober, Jaco A. Hofmann, Lukas Sommer, Lukas Weber, A. Koch
DOI: 10.1109/H2RC49586.2019.00009
Published in: 2019 IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC)
Publication date: 2019-11-01
Citations: 6

Abstract

Large cloud providers have started to make powerful FPGAs available as part of their public cloud offers. One promising application area for such instances is the acceleration of machine learning tasks. This work presents an accelerator architecture that uses multiple accelerator cores for inference in so-called Sum-Product Networks, and complements it with a host software interface that overlaps data transfer with the actual computation. The evaluation shows that the proposed architecture, deployed on Amazon AWS F1 instances, outperforms a 12-core Xeon processor by a factor of up to 1.9x and an Nvidia Tesla V100 GPU by a factor of up to 6.6x.
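To make the workload concrete: inference in a Sum-Product Network is a single bottom-up pass over a DAG of sum (weighted mixture) and product (factorization) nodes, which is what makes it amenable to deeply pipelined accelerator datapaths. The following is a minimal sketch of that evaluation on a toy two-variable SPN; the node classes and the example structure and weights are illustrative assumptions, not taken from the paper.

```python
class Leaf:
    """Univariate leaf: probability of one input variable's observed value."""
    def __init__(self, var, dist):
        self.var = var    # index into the input vector
        self.dist = dist  # maps observed value -> probability

    def value(self, x):
        return self.dist[x[self.var]]


class Product:
    """Product node: factorizes over its children."""
    def __init__(self, children):
        self.children = children

    def value(self, x):
        result = 1.0
        for c in self.children:
            result *= c.value(x)
        return result


class Sum:
    """Sum node: weighted mixture of its children (weights sum to 1)."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children

    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted_children)


# Toy SPN over two binary variables x0, x1 (structure/weights invented).
spn = Sum([
    (0.6, Product([Leaf(0, {0: 0.8, 1: 0.2}), Leaf(1, {0: 0.3, 1: 0.7})])),
    (0.4, Product([Leaf(0, {0: 0.1, 1: 0.9}), Leaf(1, {0: 0.5, 1: 0.5})])),
])

# One bottom-up pass yields the joint probability of a sample.
p = spn.value([1, 0])  # 0.6*(0.2*0.3) + 0.4*(0.9*0.5) = 0.216
```

Because every sample takes the same feed-forward pass with no data-dependent control flow, many samples can be streamed through such a network in a pipelined fashion, which is the property the multi-core accelerator exploits.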
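The host-side idea of overlapping data transfer with computation can be sketched as a simple double-buffered producer/consumer pipeline: one thread stages batch i+1 while batch i is being processed. The functions `transfer()` and `compute()` below are illustrative stand-ins for a DMA copy and an accelerator launch, not the paper's actual API.

```python
import queue
import threading


def transfer(batch):
    # Stand-in for copying one input batch to accelerator memory.
    return [float(x) for x in batch]


def compute(staged):
    # Stand-in for running inference on the staged batch.
    return sum(staged)


def pipelined(batches):
    """Overlap staging of batch i+1 with computation on batch i."""
    staged = queue.Queue(maxsize=1)  # one in-flight batch (double buffer)

    def producer():
        for b in batches:
            staged.put(transfer(b))  # blocks while the buffer is full
        staged.put(None)             # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while (s := staged.get()) is not None:
        results.append(compute(s))
    return results


results = pipelined([[1, 2], [3, 4]])
```

With transfer and compute taking comparable time, this hides most of the transfer latency behind the computation, which is what the paper's host software interface aims for.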