{"title":"基于FPGA的支持向量机节能嵌入式推理","authors":"O. Elgawi, A. Mutawa, Afaq Ahmad","doi":"10.1109/ISVLSI.2019.00038","DOIUrl":null,"url":null,"abstract":"We propose an energy-efficient embedded binarized Support Vector Machine (eBSVM) architecture and present its implementation on low-power FPGA accelerator. With binarized input activations and output weights, the dot product operation (float-point multiplications and additions) can be replaced by bitwise XNOR and popcount operations, respectively. The proposed accelerator computes the two binarized vectors using hamming weights, resulting in reduced execution time and energy consumption. Evaluation results show that eBSVM demonstrates performance and performance-per-Watt on MNIST and CIFAR-10 datasets compared to its fixed point (FP) counterpart implemented in CPU and GPU with small accuracy degradation.","PeriodicalId":6703,"journal":{"name":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","volume":"64 12","pages":"164-168"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Energy-Efficient Embedded Inference of SVMs on FPGA\",\"authors\":\"O. Elgawi, A. Mutawa, Afaq Ahmad\",\"doi\":\"10.1109/ISVLSI.2019.00038\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose an energy-efficient embedded binarized Support Vector Machine (eBSVM) architecture and present its implementation on low-power FPGA accelerator. With binarized input activations and output weights, the dot product operation (float-point multiplications and additions) can be replaced by bitwise XNOR and popcount operations, respectively. The proposed accelerator computes the two binarized vectors using hamming weights, resulting in reduced execution time and energy consumption. Evaluation results show that eBSVM demonstrates performance and performance-per-Watt on MNIST and CIFAR-10 datasets compared to its fixed point (FP) counterpart implemented in CPU and GPU with small accuracy degradation.\",\"PeriodicalId\":6703,\"journal\":{\"name\":\"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)\",\"volume\":\"64 12\",\"pages\":\"164-168\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISVLSI.2019.00038\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2019.00038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Energy-Efficient Embedded Inference of SVMs on FPGA
We propose an energy-efficient embedded binarized Support Vector Machine (eBSVM) architecture and present its implementation on a low-power FPGA accelerator. With binarized input activations and output weights, the dot-product operation (floating-point multiplications and additions) can be replaced by bitwise XNOR and popcount operations, respectively. The proposed accelerator computes the dot product of two binarized vectors using Hamming weights, reducing both execution time and energy consumption. Evaluation results show that eBSVM achieves competitive performance and improved performance-per-Watt on the MNIST and CIFAR-10 datasets compared to its fixed-point (FP) counterpart implemented on CPU and GPU, with only a small accuracy degradation.
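To illustrate the substitution the abstract describes, here is a minimal C sketch of a binarized dot product computed with XNOR and popcount. It assumes the common {-1, +1} encoding where bit 1 stands for +1 and bit 0 for -1, so dot(a, b) = 2 * popcount(XNOR(a, b)) - N for N-bit vectors; the function and variable names are illustrative, not taken from the paper's implementation.

#include <stdint.h>
#include <stdio.h>

/* Sketch: binarized dot product via XNOR + popcount (Hamming weight).
 * Bits encode +1 (bit = 1) and -1 (bit = 0). Each bit position where
 * the vectors agree contributes +1 to the dot product; each mismatch
 * contributes -1, hence dot = 2 * matches - n_bits. */
static int bin_dot(const uint64_t *a, const uint64_t *b, int words, int n_bits)
{
    int matches = 0;
    for (int i = 0; i < words; i++) {
        /* XNOR sets a bit wherever the two vectors agree;
         * popcount (Hamming weight) counts those agreements. */
        matches += __builtin_popcountll(~(a[i] ^ b[i]));
    }
    /* Padding bits in the last word are zero in both inputs, so the
     * XNOR counts them as spurious matches; discount them here. */
    matches -= 64 * words - n_bits;
    return 2 * matches - n_bits;
}

int main(void)
{
    /* Two 8-bit example vectors packed into one 64-bit word each. */
    uint64_t a[1] = { 0xB5 }; /* 10110101 -> +1,-1,+1,+1,-1,+1,-1,+1 */
    uint64_t b[1] = { 0xA7 }; /* 10100111 -> +1,-1,+1,-1,-1,+1,+1,+1 */
    printf("dot = %d\n", bin_dot(a, b, 1, 8)); /* prints dot = 4 */
    return 0;
}

On an FPGA the same idea maps to wide XNOR gates feeding a popcount adder tree, which is what replaces the floating-point multiply-accumulate datapath; __builtin_popcountll above is just the GCC/Clang software stand-in for that hardware primitive.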