AutoML for Multilayer Perceptron and FPGA Co-design

Philip Colangelo, Oren Segal, Alexander Speicher, M. Margala
{"title":"AutoML for Multilayer Perceptron and FPGA Co-design","authors":"Philip Colangelo, Oren Segal, Alexander Speicher, M. Margala","doi":"10.1109/socc49529.2020.9524785","DOIUrl":null,"url":null,"abstract":"Optimizing neural network architectures (NNA) is a difficult process in part because of the vast number of hyperparameter combinations that exist. The difficulty in designing performant neural networks has brought a recent surge in interest in the automatic design and optimization of neural networks. The focus of the existing body of research has been on optimizing NNA for accuracy [1] [2] with publications starting to address hardware optimizations [3]. Our focus is to close this gap by using evolutionary algorithms to search an entire design space, including NNA and reconfigurable hardware. Large data-centric companies such as Facebook[4] [5] and Google [6] have published data showing that MLP workloads are the majority of their application base. Facebook cites the use of MLP for tasks such as determining which ads to display, which stories matter to see in a news feed, and which results to present from a search. Park et al. stress the importance of these networks and the current limitations on standard hardware and the call for what this research aims to solve, i.e., software and hardware co-design in [7]. Our research aims to take advantage of the reconfigurable architecture of an FPGA device that is capable of molding to a specific workload and neural network structure. 
Leveraging evolutionary algorithms to search the entire design space of both MLP and target hardware simultaneously, we find unique solutions that achieve both top accuracy and optimal hardware performance.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"06 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/socc49529.2020.9524785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Optimizing neural network architectures (NNA) is a difficult process, in part because of the vast number of hyperparameter combinations that exist. The difficulty of designing performant neural networks has brought a recent surge of interest in the automatic design and optimization of neural networks. The existing body of research has focused on optimizing NNA for accuracy [1] [2], with publications beginning to address hardware optimizations [3]. Our focus is to close this gap by using evolutionary algorithms to search an entire design space that includes both the NNA and the reconfigurable hardware. Large data-centric companies such as Facebook [4] [5] and Google [6] have published data showing that MLP workloads make up the majority of their application base. Facebook cites the use of MLPs for tasks such as determining which ads to display, which stories to surface in a news feed, and which results to present from a search. Park et al. [7] stress the importance of these networks, note their current limitations on standard hardware, and call for exactly what this research aims to deliver: software and hardware co-design. Our research takes advantage of the reconfigurable architecture of an FPGA, which is capable of molding itself to a specific workload and neural network structure. Leveraging evolutionary algorithms to search the entire design space of both the MLP and the target hardware simultaneously, we find unique solutions that achieve both top accuracy and optimal hardware performance.
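The co-design search the abstract describes can be illustrated with a minimal genetic algorithm over a joint genome of MLP hyperparameters and FPGA hardware parameters. This is a hedged sketch, not the paper's implementation: the parameter names (`pe_array_size`, `precision_bits`, etc.) and the toy fitness function are assumptions for illustration only; a real co-design flow would train each candidate MLP for accuracy and query an FPGA performance model for throughput and resource cost.

```python
import random

# Hypothetical joint search space: MLP hyperparameters evolved
# together with FPGA hardware parameters (illustrative names only).
SEARCH_SPACE = {
    "hidden_layers":  [1, 2, 3, 4],
    "layer_width":    [64, 128, 256, 512],
    "activation":     ["relu", "tanh", "sigmoid"],
    "pe_array_size":  [8, 16, 32, 64],      # FPGA processing elements
    "precision_bits": [8, 16, 32],          # datapath precision
}

def random_genome():
    # One candidate = one choice per dimension of the joint space.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(genome):
    # Stand-in objective: a mock accuracy term (favoring wider and
    # deeper nets) divided by a mock hardware-cost penalty (favoring
    # smaller PE arrays and lower precision), so the search must
    # trade accuracy against hardware performance.
    acc = genome["hidden_layers"] * genome["layer_width"]
    cost = genome["pe_array_size"] * genome["precision_bits"]
    return acc / (1 + 0.01 * cost)

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(genome, rate=0.2):
    # Each gene independently resampled with probability `rate`.
    return {k: (random.choice(v) if random.random() < rate else genome[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(pop_size=20, generations=10, seed=0):
    random.seed(seed)
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]     # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because the NNA and hardware genes sit in one genome, selection pressure acts on the pair jointly, which is the essence of the co-design approach the paper advocates over optimizing the network first and the hardware after.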