{"title":"FPL Demo: Logic Shrinkage: A Neural Architecture Search-Based Approach to FPGA Netlist Generation","authors":"Marie Auffret, Erwei Wang, James J. Davis","doi":"10.1109/FPL57034.2022.00086","DOIUrl":null,"url":null,"abstract":"Logic shrinkage is an open-source, state-of-the-art neural architecture search (NAS)-based approach to the automated design of DNN inference accelerators that ideally suit FPGA deployment [1], [2]. Where NAS traditionally sees candidate functions such as convolutions automatically evaluated and selected between to form a network, logic shrinkage operates at ultra-fine granularity, resulting in a netlist of LUTs as the topology. Our results for datasets of complexity ranging from MNIST to ImageNet show area and energy efficiency gains vs binary neural networks (BNNs) of up to ~6 x and ~ lOx.","PeriodicalId":380116,"journal":{"name":"2022 32nd International Conference on Field-Programmable Logic and Applications (FPL)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 32nd International Conference on Field-Programmable Logic and Applications (FPL)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FPL57034.2022.00086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Logic shrinkage is an open-source, state-of-the-art neural architecture search (NAS)-based approach to the automated design of DNN inference accelerators ideally suited to FPGA deployment [1], [2]. Whereas NAS traditionally evaluates and selects between coarse candidate functions such as convolutions to form a network, logic shrinkage operates at ultra-fine granularity, producing a netlist of LUTs as the network topology. Our results for datasets ranging in complexity from MNIST to ImageNet show area and energy efficiency gains over binary neural networks (BNNs) of up to ~6× and ~10×, respectively.
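To make "ultra-fine granularity" concrete: the search decides, per LUT input, whether that connection is worth keeping. Below is a minimal PyTorch sketch of this idea, assuming a differentiable truth-table relaxation and trainable per-input importance gates; the class and method names (`ShrinkableLUT`, `shrink`) and the exact relaxation are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch (not the paper's code) of fine-grained NAS over LUT inputs:
# each K-input LUT carries trainable per-input importance gates, and inputs
# whose gates decay toward zero are pruned ("shrunk") after the search.
import torch
import torch.nn as nn


class ShrinkableLUT(nn.Module):
    """One learnable K-input LUT with per-input importance gates (illustrative)."""

    def __init__(self, k: int = 4):
        super().__init__()
        self.k = k
        # One truth-table entry per input combination (2^K of them).
        self.table = nn.Parameter(torch.randn(2 ** k))
        # Per-input importance scores; small magnitudes mark prunable inputs.
        self.gate = nn.Parameter(torch.ones(k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k) with entries in {-1, +1} (binary activations).
        x = x * self.gate  # softly gate each input during search
        # Enumerate all 2^K input patterns as {-1, +1} bit vectors.
        idx = torch.arange(2 ** self.k, device=x.device)
        bits = (idx.unsqueeze(1) >> torch.arange(self.k, device=x.device)) & 1
        bits = bits.float() * 2 - 1
        # Differentiable LUT lookup: each table entry is weighted by how well
        # the (gated) input matches that entry's bit pattern.
        match = (1 + x.unsqueeze(1) * bits.unsqueeze(0)) / 2  # (batch, 2^K, K)
        weight = match.prod(dim=-1)                           # (batch, 2^K)
        return weight @ torch.tanh(self.table)

    def shrink(self, threshold: float = 0.1):
        """Return the input indices that survive pruning (hypothetical helper)."""
        return [i for i, g in enumerate(self.gate.detach().abs().tolist())
                if g > threshold]


# Usage sketch: train end-to-end with an L1 penalty on the gates, then shrink.
lut = ShrinkableLUT(k=4)
x = torch.randint(0, 2, (8, 4)).float() * 2 - 1  # random ±1 inputs
y = lut(x)                                        # differentiable output
kept = lut.shrink()  # e.g. [0, 2, 3]: this LUT needs only 3 physical inputs
```

Under these assumptions, an L1 penalty on the gates during search pushes unimportant inputs toward zero, and each surviving truth table then maps onto a physical FPGA LUT with correspondingly fewer inputs, which is what yields the netlist-level area savings the abstract reports.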