AutoML for Multilayer Perceptron and FPGA Co-design
Philip Colangelo, Oren Segal, Alexander Speicher, M. Margala
2020 IEEE 33rd International System-on-Chip Conference (SOCC)
Published: 2020-09-08
DOI: 10.1109/socc49529.2020.9524785
Citations: 3
Abstract
Optimizing neural network architectures (NNA) is a difficult process, in part because of the vast number of hyperparameter combinations that exist. The difficulty of designing performant neural networks has driven a recent surge of interest in the automatic design and optimization of neural networks. The existing body of research has focused on optimizing NNA for accuracy [1] [2], with publications only beginning to address hardware optimization [3]. Our focus is to close this gap by using evolutionary algorithms to search the entire design space, spanning both the NNA and the reconfigurable hardware. Large data-centric companies such as Facebook [4] [5] and Google [6] have published data showing that multilayer perceptron (MLP) workloads make up the majority of their application base. Facebook cites the use of MLPs for tasks such as determining which ads to display, which stories to surface in a news feed, and which results to return from a search. Park et al. [7] stress the importance of these networks, highlight their limitations on standard hardware, and call for exactly what this research aims to deliver: software and hardware co-design. Our research takes advantage of the reconfigurable architecture of an FPGA, which can be molded to a specific workload and neural network structure. By leveraging evolutionary algorithms to search the entire design space of both the MLP and the target hardware simultaneously, we find unique solutions that achieve both top accuracy and optimal hardware performance.
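To make the joint search concrete, the sketch below shows the general shape of an evolutionary co-design loop: each genome pairs MLP hyperparameters (hidden-layer widths) with hardware-style parameters (processing-element count, datapath precision), and a single fitness function trades off a capacity proxy against a hardware-cost proxy. All names, parameter ranges, and the fitness formula are illustrative assumptions, not the paper's actual encoding or evaluation.

```python
import random

# Hypothetical search-space bounds (illustrative, not from the paper).
LAYER_CHOICES = [32, 64, 128, 256]   # units per hidden MLP layer
PE_CHOICES = [4, 8, 16, 32]          # processing-element array width
PRECISION_CHOICES = [8, 16, 32]      # datapath bit width

def random_genome(rng):
    """A genome couples MLP structure with hardware parameters."""
    depth = rng.randint(1, 3)
    return {
        "layers": [rng.choice(LAYER_CHOICES) for _ in range(depth)],
        "pes": rng.choice(PE_CHOICES),
        "bits": rng.choice(PRECISION_CHOICES),
    }

def fitness(g):
    # Stand-in objective: reward model capacity (a crude proxy for
    # accuracy) and penalize hardware cost (a crude proxy for FPGA
    # resource usage). A real co-design flow would train/evaluate the
    # MLP and query a hardware performance model instead.
    capacity = sum(g["layers"])
    hw_cost = g["pes"] * g["bits"]
    return capacity / 256.0 - hw_cost / 1024.0

def mutate(g, rng):
    """Copy the genome and perturb one gene at random."""
    child = {"layers": list(g["layers"]), "pes": g["pes"], "bits": g["bits"]}
    gene = rng.choice(["layers", "pes", "bits"])
    if gene == "layers":
        i = rng.randrange(len(child["layers"]))
        child["layers"][i] = rng.choice(LAYER_CHOICES)
    elif gene == "pes":
        child["pes"] = rng.choice(PE_CHOICES)
    else:
        child["bits"] = rng.choice(PRECISION_CHOICES)
    return child

def evolve(generations=20, pop_size=16, seed=0):
    """Elitist evolutionary search over the joint MLP/hardware space."""
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fittest half
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
```

Because the same fitness call scores both the network structure and the hardware configuration, selection pressure pushes the population toward designs that are good on both axes simultaneously, which is the essence of the co-design search the abstract describes.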