R. Soloviev, D. Telpukhov, I. Mkrtchan, A. Kustov, A. Stempkovskiy
2020 Moscow Workshop on Electronic and Networking Technologies (MWENT), March 2020
DOI: 10.1109/MWENT47943.2020.9067498
Hardware Implementation of Convolutional Neural Networks Based on Residue Number System
The paper examines the use of the residue number system (RNS) for the hardware implementation of neural networks on VLSI or FPGA. Widely known mobile neural networks that are highly accurate and well suited to hardware implementation are explored, and the major difficulties in their RNS implementation are examined. Several methods for solving the related problems are proposed: convolutions with a stride greater than one are considered in place of the previously used MaxPooling layers; non-standard activation functions containing only addition, subtraction, and multiplication are investigated; and an efficient algorithm for implementing scaling is proposed and compared with the conventional algorithm. Finally, a complete flow is proposed for designing the MobileNet neural network and transferring it to the hardware level on an RNS basis.
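To illustrate why RNS is attractive for convolution hardware, the sketch below shows carry-free, channel-wise addition and multiplication over residues, with the final result recovered via the Chinese Remainder Theorem. This is a minimal illustration under assumed parameters, not the paper's implementation: the moduli set `(7, 11, 13, 17)` and the helper names are hypothetical, and the paper's actual design choices (moduli, scaling algorithm) are not reproduced here.

```python
# Illustrative RNS arithmetic sketch (assumptions: hypothetical moduli set,
# not the moduli or scaling method used in the paper).
from math import prod

MODULI = (7, 11, 13, 17)      # pairwise-coprime moduli
M = prod(MODULI)              # dynamic range: 7*11*13*17 = 17017

def to_rns(x):
    """Encode an integer as its tuple of residues, one per modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Add channel-wise: no carries propagate between residue channels."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiply channel-wise: each channel is a small, independent multiplier."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Recover the integer with the Chinese Remainder Theorem."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) = modular inverse
    return x % M

# Toy multiply-accumulate, as in a convolution dot product:
# 3*25 - 2*14 + 5*9 = 92 (negative weights map to M - |w|).
acc = to_rns(0)
for w, p in [(3, 25), (M - 2, 14), (5, 9)]:
    acc = rns_add(acc, rns_mul(to_rns(w), to_rns(p)))
print(from_rns(acc))  # 92
```

Because each residue channel works modulo a small value, the wide multipliers of a conventional binary datapath split into several narrow, parallel ones; the costly operations in RNS are exactly the ones the abstract addresses, namely comparison-based layers (hence stride-2 convolutions instead of MaxPooling), division-based activations (hence add/sub/mul-only activations), and scaling.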