Introduction to the Special Section on Deep Learning in FPGAs

Authors: Deming Chen, Andrew Putnam, S. Wilton
Journal: ACM Transactions on Reconfigurable Technology and Systems (TRETS)
DOI: 10.1145/3294768
Publication date: 2018-12-22
Abstract
Deep Learning (DL), especially in the form of Deep Neural Networks (DNNs), has advanced rapidly and has been shown to match and even exceed human capabilities in tasks such as image recognition, playing complex games, and large-scale information retrieval. However, due to the high computational and power demands of deep neural networks, hardware accelerators are essential to ensure that computation speed meets application requirements. Field-programmable gate arrays (FPGAs) have demonstrated great strength in accelerating deep learning inference with high energy efficiency. To explore the strengths of FPGAs thoroughly and assemble a pool of advanced, representative research works, we issued a call for a special issue of TRETS on the topic of DL on FPGAs. The topics of interest span many aspects of DL on FPGAs: compilers, tools, and design methodologies; microarchitectures; cloud deployments; edge and IoT; DNN compression; security; and comparison and survey studies, among others. Many researchers answered this call and submitted their most recent research results. After a subset of the submissions was desk-rejected for quality-control purposes, a total of 23 manuscripts went through a full review process. To facilitate a fast, fair, and effective review for this special issue, we formed a special pool of reviewers who are experts on DL and FPGA topics. After this rigorous process, eight top-quality papers have been accepted into the special issue so far. The following list gives the title of each paper and the institution(s) of its authors, and highlights the contributions of each article.