Google Investment in Open Source Custom Hardware Development Including No-Cost Shuttle Program

Tim Ansell
{"title":"谷歌对开源定制硬件开发的投资,包括免费的航天飞机项目","authors":"Tim Ansell","doi":"10.1145/3569052.3580028","DOIUrl":null,"url":null,"abstract":"The end of Moore's Law combined with unabated growth in usage have forced Google to turn to hardware acceleration to deliver efficiency gains to meet demand. Traditional hardware design methodology for accelerators is practical when there's a common core - such as with Machine Learning (ML) or video transcoding, but what about the hundreds of smaller tasks performed in Google data centers? Our vision is \"software-speed\" development for hardware acceleration so that it becomes commonplace and, frankly, boring. Toward this goal Google is investing in open tooling to foster innovation in multiplying accelerator developer productivity. Tim Ansell will provide an outline of these coordinated open source projects in EDA (including high level synthesis), IP, PDKs, and related areas. This will be followed by presenting the CFU (Custom Function Unit) Playground, which utilizes many of these projects. The CFU Playground lets you build your own specialized & optimized ML processor based on the open RISC-V ISA, implemented on an FPGA using a fully open source stack. The goal isn't general ML extensions; it's about a methodology for building your own extension specialized just for your specific tiny ML model. The extension can range from a few simple new instructions, up to a complex accelerator that interfaces to the CPU via a set of custom instructions; we will show examples of both.","PeriodicalId":169581,"journal":{"name":"Proceedings of the 2023 International Symposium on Physical Design","volume":"281 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Google Investment in Open Source Custom Hardware Development Including No-Cost Shuttle Program\",\"authors\":\"Tim Ansell\",\"doi\":\"10.1145/3569052.3580028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The end of Moore's Law combined with unabated growth in usage have forced Google to turn to hardware acceleration to deliver efficiency gains to meet demand. Traditional hardware design methodology for accelerators is practical when there's a common core - such as with Machine Learning (ML) or video transcoding, but what about the hundreds of smaller tasks performed in Google data centers? Our vision is \\\"software-speed\\\" development for hardware acceleration so that it becomes commonplace and, frankly, boring. Toward this goal Google is investing in open tooling to foster innovation in multiplying accelerator developer productivity. Tim Ansell will provide an outline of these coordinated open source projects in EDA (including high level synthesis), IP, PDKs, and related areas. This will be followed by presenting the CFU (Custom Function Unit) Playground, which utilizes many of these projects. The CFU Playground lets you build your own specialized & optimized ML processor based on the open RISC-V ISA, implemented on an FPGA using a fully open source stack. The goal isn't general ML extensions; it's about a methodology for building your own extension specialized just for your specific tiny ML model. 
The extension can range from a few simple new instructions, up to a complex accelerator that interfaces to the CPU via a set of custom instructions; we will show examples of both.\",\"PeriodicalId\":169581,\"journal\":{\"name\":\"Proceedings of the 2023 International Symposium on Physical Design\",\"volume\":\"281 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 International Symposium on Physical Design\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3569052.3580028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 International Symposium on Physical Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3569052.3580028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The end of Moore's Law, combined with unabated growth in usage, has forced Google to turn to hardware acceleration to deliver the efficiency gains needed to meet demand. Traditional hardware design methodology for accelerators is practical when there is a common core, such as with Machine Learning (ML) or video transcoding, but what about the hundreds of smaller tasks performed in Google data centers? Our vision is "software-speed" development for hardware acceleration so that it becomes commonplace and, frankly, boring. Toward this goal, Google is investing in open tooling to foster innovation that multiplies accelerator developer productivity. Tim Ansell will provide an outline of these coordinated open source projects in EDA (including high-level synthesis), IP, PDKs, and related areas. This will be followed by a presentation of the CFU (Custom Function Unit) Playground, which utilizes many of these projects. The CFU Playground lets you build your own specialized and optimized ML processor based on the open RISC-V ISA, implemented on an FPGA using a fully open source stack. The goal isn't general ML extensions; it's a methodology for building your own extension specialized just for your specific tiny ML model. The extension can range from a few simple new instructions up to a complex accelerator that interfaces to the CPU via a set of custom instructions; we will show examples of both.
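
As context for the custom-instruction interface described above, here is a minimal, illustrative sketch of how software might invoke a CFU operation from a RISC-V core. It assumes a core that routes the RISC-V CUSTOM_0 opcode (0x0b) to an attached Custom Function Unit; the `cfu_op` macro, the `cfu_mac` helper, and the funct3/funct7 encodings are hypothetical examples chosen for illustration, not the CFU Playground's actual API.

```c
#include <stdint.h>

/* Issue an R-type instruction on the RISC-V CUSTOM_0 opcode (0x0b).
 * funct3/funct7 must be literal constants; they select which operation
 * the attached CFU performs on the two register operands.
 * (Hypothetical macro; requires GCC/Clang statement expressions and
 * RISC-V .insn assembler support.) */
#define cfu_op(funct3, funct7, rs1, rs2) ({                           \
  uint32_t _rd;                                                       \
  asm volatile(".insn r 0x0b, " #funct3 ", " #funct7 ", %0, %1, %2"   \
               : "=r"(_rd)                                            \
               : "r"(rs1), "r"(rs2));                                 \
  _rd;                                                                \
})

/* Hypothetical example: assume the CFU is wired so that funct3=0,
 * funct7=0 performs a multiply-accumulate into an accumulator held
 * inside the CFU, returning the running total. */
static inline uint32_t cfu_mac(uint32_t activation, uint32_t weight) {
  return cfu_op(0, 0, activation, weight);
}

/* Dot product of quantized activations and weights, one custom
 * instruction per term; the CFU returns the updated accumulator. */
uint32_t dot_product(const uint8_t *activations, const int8_t *weights,
                     int n) {
  uint32_t acc = 0;
  for (int i = 0; i < n; i++) {
    acc = cfu_mac(activations[i], (uint32_t)weights[i]);
  }
  return acc;
}
```

This illustrates both ends of the spectrum mentioned in the abstract: a "few simple new instructions" design might compute each result combinationally within the instruction itself, while a more complex accelerator could use the same custom-instruction interface to stream operands into a pipelined datapath and read results back over several invocations.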