Demo: Resource Allocation for Wafer-Scale Deep Learning Accelerator

Huihong Peng, Longkun Guo, Long Sun, Xiaoyan Zhang
{"title":"Demo: Resource Allocation for Wafer-Scale Deep Learning Accelerator","authors":"Huihong Peng, Longkun Guo, Long Sun, Xiaoyan Zhang","doi":"10.1109/ICDCS51616.2021.00114","DOIUrl":null,"url":null,"abstract":"Due to the rapid development of deep learning (DL) has brought, artificial intelligence (AI) chips were invented incorperating the traditional computing architecture with the simulated neural network structure for the sake of improving the energy efficiency. Recently, emerging deep learning AI chips imposed the challenge of allocating computing resources according to a deep neural networks (DNN), such that tasks using the DNN can be processed in a parallel and distributed manner. In this paper, we combine graph theory and combinatorial optimization technology to devise a fast floorplanning approach based on kernel graph structure, which is provided by Cerebras Systems Inc. for mapping the layers of DNN to the mesh of computing units called Wafer-Scale-Engine (WSE). Numerical experiments were carried out to evaluate our method using the public benchmarks and evaluation criteria, demonstrating its performance gain comparing to the state-of-art algorithms.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS51616.2021.00114","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The rapid development of deep learning (DL) has driven the invention of artificial intelligence (AI) chips that incorporate simulated neural network structures into traditional computing architectures to improve energy efficiency. These emerging deep learning AI chips pose the challenge of allocating computing resources according to a deep neural network (DNN), so that tasks using the DNN can be processed in a parallel and distributed manner. In this paper, we combine graph theory and combinatorial optimization techniques to devise a fast floorplanning approach based on the kernel graph structure provided by Cerebras Systems Inc., which maps the layers of a DNN onto the mesh of computing units called the Wafer-Scale Engine (WSE). Numerical experiments using public benchmarks and evaluation criteria demonstrate the performance gain of our method compared to state-of-the-art algorithms.
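To make the floorplanning task concrete: each DNN layer (a node of the kernel graph) requests a rectangular block of cores, and these blocks must be packed onto the WSE's 2D mesh without overlap, ideally keeping communicating layers close. Below is a minimal Python sketch of only the non-overlap packing side of the problem, using a greedy shelf-packing heuristic. This is not the authors' kernel-graph algorithm; the Kernel class, the place_shelf function, the 633x633 fabric size (which I believe matches the related ISPD 2020 WSE placement benchmark, but treat it as an assumption), and all layer dimensions are illustrative inventions.

    # Illustrative sketch only: greedy shelf packing of kernel rectangles
    # onto a WSE-like core mesh. All names and sizes are assumptions,
    # not the paper's method.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Kernel:
        name: str
        height: int  # rows of cores requested
        width: int   # columns of cores requested

    def place_shelf(kernels: List[Kernel],
                    fabric_h: int = 633,
                    fabric_w: int = 633) -> List[Tuple[str, int, int]]:
        """Place kernel rectangles row by row ("shelves") on the core mesh.

        Returns (kernel name, row, col) of each rectangle's top-left
        corner; raises if the fabric cannot fit all kernels under this
        heuristic.
        """
        placements = []
        # Sort tallest-first so each shelf wastes less vertical space.
        order = sorted(kernels, key=lambda k: k.height, reverse=True)
        row, col, shelf_h = 0, 0, 0
        for k in order:
            if col + k.width > fabric_w:  # current shelf full: open a new one
                row, col, shelf_h = row + shelf_h, 0, 0
            if row + k.height > fabric_h:
                raise ValueError(f"fabric exhausted placing kernel {k.name}")
            placements.append((k.name, row, col))
            col += k.width
            shelf_h = max(shelf_h, k.height)
        return placements

    if __name__ == "__main__":
        # Toy 4-layer kernel graph; sizes are made up for illustration.
        layers = [Kernel("conv1", 100, 200), Kernel("conv2", 150, 150),
                  Kernel("fc1", 80, 300), Kernel("fc2", 60, 120)]
        for name, r, c in place_shelf(layers):
            print(f"{name}: top-left at row {r}, col {c}")

A real WSE floorplanner would additionally choose each kernel's aspect ratio and optimize the communication cost between connected layers of the kernel graph; the shelf heuristic above only demonstrates the basic non-overlapping placement constraint.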