EDA for Domain Specific Computing: An Introduction for the Panel

I. Jiang, D. Chinnery
{"title":"面向特定领域计算的EDA:专题小组简介","authors":"I. Jiang, D. Chinnery","doi":"10.1145/3569052.3580221","DOIUrl":null,"url":null,"abstract":"This panel explores domain-specific computing from hardware, software, and electronic design automation (EDA) perspectives. Hennessey and Patterson signaled a new \"golden age of computer architecture\" in 2018 [1]. Process technology advances and general-purpose processor improvements provided much faster and more efficient computation, but scaling with Moore's law has slowed significantly. Domain-specific customization can improve power-performance efficiency by orders-of-magnitude for important application domains, such as graphics, deep neural networks (DNN) for machine learning [2], simulation, bioinformatics [3], image processing, and many other tasks. The common features of domain-specific architectures are: 1) dedicated memories to minimize data movement across chip; 2) more arithmetic units or bigger memories; 3) use of parallelism matching the domain; 4) smaller data types appropriate for the target applications; and 5) domain-specific software languages. Expediting software development with optimized compilation for efficient fast computation on heterogeneous architectures is a difficult task, and must be considered with the hardware design. For example, GPU programming has used CUDA and OpenCL. The hardware comprises application-specific integrated circuits (ASICs) [4] and systems-of-chips (SoCs). General-purpose processor cores are often combined with graphics processing units (GPUs) for stream processing, digital signal processors, field programmable gate arrays (FPGAs) for configurability [5], artificial intelligence (AI) acceleration hardware, and so forth. Domain-specific computers have been deployed recently. For example: the Google Tensor Processing Unit (DNN ASIC) [6]; Microsoft Catapult (FPGA-based cloud domain-service solution) [7]; Intel Crest (DNN ASIC) [8]; Google Pixel Visual Core (image processing and computer vision for cell phones and tablets) [9]; and the RISC-V architecture and open instruction set for heterogeneous computing [10].","PeriodicalId":169581,"journal":{"name":"Proceedings of the 2023 International Symposium on Physical Design","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EDA for Domain Specific Computing: An Introduction for the Panel\",\"authors\":\"I. Jiang, D. Chinnery\",\"doi\":\"10.1145/3569052.3580221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This panel explores domain-specific computing from hardware, software, and electronic design automation (EDA) perspectives. Hennessey and Patterson signaled a new \\\"golden age of computer architecture\\\" in 2018 [1]. Process technology advances and general-purpose processor improvements provided much faster and more efficient computation, but scaling with Moore's law has slowed significantly. Domain-specific customization can improve power-performance efficiency by orders-of-magnitude for important application domains, such as graphics, deep neural networks (DNN) for machine learning [2], simulation, bioinformatics [3], image processing, and many other tasks. 
The common features of domain-specific architectures are: 1) dedicated memories to minimize data movement across chip; 2) more arithmetic units or bigger memories; 3) use of parallelism matching the domain; 4) smaller data types appropriate for the target applications; and 5) domain-specific software languages. Expediting software development with optimized compilation for efficient fast computation on heterogeneous architectures is a difficult task, and must be considered with the hardware design. For example, GPU programming has used CUDA and OpenCL. The hardware comprises application-specific integrated circuits (ASICs) [4] and systems-of-chips (SoCs). General-purpose processor cores are often combined with graphics processing units (GPUs) for stream processing, digital signal processors, field programmable gate arrays (FPGAs) for configurability [5], artificial intelligence (AI) acceleration hardware, and so forth. Domain-specific computers have been deployed recently. For example: the Google Tensor Processing Unit (DNN ASIC) [6]; Microsoft Catapult (FPGA-based cloud domain-service solution) [7]; Intel Crest (DNN ASIC) [8]; Google Pixel Visual Core (image processing and computer vision for cell phones and tablets) [9]; and the RISC-V architecture and open instruction set for heterogeneous computing [10].\",\"PeriodicalId\":169581,\"journal\":{\"name\":\"Proceedings of the 2023 International Symposium on Physical Design\",\"volume\":\"90 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 International Symposium on Physical Design\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3569052.3580221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 International Symposium on Physical Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3569052.3580221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This panel explores domain-specific computing from hardware, software, and electronic design automation (EDA) perspectives. Hennessy and Patterson signaled a new "golden age of computer architecture" in 2018 [1]. Process technology advances and general-purpose processor improvements provided much faster and more efficient computation, but scaling with Moore's law has slowed significantly. Domain-specific customization can improve power-performance efficiency by orders of magnitude for important application domains, such as graphics, deep neural networks (DNNs) for machine learning [2], simulation, bioinformatics [3], image processing, and many other tasks.

The common features of domain-specific architectures are: 1) dedicated memories to minimize data movement across the chip; 2) more arithmetic units or bigger memories; 3) parallelism that matches the domain; 4) smaller data types appropriate for the target applications; and 5) domain-specific software languages. Expediting software development with optimized compilation for fast, efficient computation on heterogeneous architectures is a difficult task, and must be considered together with the hardware design. For example, GPU programming has used CUDA and OpenCL.

The hardware comprises application-specific integrated circuits (ASICs) [4] and systems-on-chip (SoCs). General-purpose processor cores are often combined with graphics processing units (GPUs) for stream processing, digital signal processors, field-programmable gate arrays (FPGAs) for configurability [5], artificial intelligence (AI) acceleration hardware, and so forth. Domain-specific computers have been deployed recently, for example: the Google Tensor Processing Unit (a DNN ASIC) [6]; Microsoft Catapult (an FPGA-based cloud domain-service solution) [7]; Intel Crest (a DNN ASIC) [8]; the Google Pixel Visual Core (image processing and computer vision for cell phones and tablets) [9]; and the RISC-V architecture and open instruction set for heterogeneous computing [10].
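To make two of the listed traits concrete, the sketch below is a minimal illustration of my own, not code from the panel or its references: a CUDA kernel that pairs parallelism matched to the workload (a grid-stride loop, one thread per partial sum) with smaller data types (8-bit integer operands accumulated in 32 bits, a common choice for DNN inference). The kernel name, sizes, and launch configuration are arbitrary.

```cuda
// Minimal illustrative sketch (not from the paper): an int8 dot product on a GPU.
// It exercises two traits from the list above: parallelism matched to the workload
// (a grid-stride loop over vector elements) and smaller data types (8-bit operands
// widened into a 32-bit accumulator, as is common in DNN inference kernels).
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dot_int8(const int8_t* a, const int8_t* b, int n, int* out) {
    int partial = 0;
    // Grid-stride loop: each thread handles a strided slice of the vectors.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        partial += static_cast<int>(a[i]) * static_cast<int>(b[i]);  // widen to 32-bit
    }
    atomicAdd(out, partial);  // combine per-thread partial sums
}

int main() {
    const int n = 1 << 20;
    int8_t *a, *b;
    int *out;
    cudaMallocManaged(&a, n * sizeof(int8_t));  // unified memory keeps the example short
    cudaMallocManaged(&b, n * sizeof(int8_t));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < n; ++i) { a[i] = 1; b[i] = 2; }
    *out = 0;

    dot_int8<<<256, 256>>>(a, b, n, out);
    cudaDeviceSynchronize();

    printf("dot = %d (expected %d)\n", *out, 2 * n);
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

An equivalent OpenCL kernel would follow the same pattern; the point is simply that narrowing data types and shaping parallelism to the workload are programming-model decisions as much as hardware ones.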