A new approach to binarizing neural networks
J. Seo, Joonsang Yu, Jongeun Lee, Kiyoung Choi
2016 International SoC Design Conference (ISOCC), October 2016
DOI: 10.1109/ISOCC.2016.7799741

Abstract: As deep neural networks grow larger, they suffer from a huge number of weights, so reducing the overhead of handling those weights has become one of the key challenges today. This paper presents a new approach to binarizing neural networks, in which the weights are pruned and forced to take degenerate binary values. Experimental results show that the proposed approach achieves significant reductions in computation and power consumption at the cost of a slight loss in accuracy.
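To make the idea concrete, the following is a minimal sketch of what "pruning weights and forcing the survivors to degenerate binary values" can look like in general: small-magnitude weights are set to zero, and the remaining weights are replaced by a single shared magnitude with their original sign. This is an illustrative assumption, not the paper's exact algorithm; the `prune_ratio` parameter and the mean-magnitude scaling are choices made here for the example.

```python
import numpy as np

def prune_and_binarize(w, prune_ratio=0.5):
    """Prune the smallest-magnitude weights to zero, then force the
    survivors onto two values, +alpha and -alpha.

    Illustrative sketch only; `prune_ratio` and the scaling rule are
    assumptions, not the method described in the paper.
    """
    w = np.asarray(w, dtype=np.float64)
    # Magnitude threshold below which weights are pruned to zero.
    threshold = np.quantile(np.abs(w), prune_ratio)
    mask = np.abs(w) > threshold
    # One shared magnitude for all surviving weights.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.where(mask, alpha * np.sign(w), 0.0)

w = np.array([0.9, -0.1, 0.05, -0.8, 0.3, -0.4])
wb = prune_and_binarize(w, prune_ratio=0.5)
# wb now contains only 0, +alpha, and -alpha, so multiplications by the
# weight matrix reduce to sign flips, additions, and one final scaling,
# which is where the computation and power savings come from.
```

Because every nonzero weight shares the same magnitude, a dot product with such a weight vector needs no general multiplications, only signed accumulation of the surviving inputs followed by a single multiply by alpha.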