{"title":"微扰二值神经网络","authors":"Vlad Pelin, I. Radoi","doi":"10.1109/ROEDUNET.2019.8909493","DOIUrl":null,"url":null,"abstract":"Research into deep neural networks has brought about architectures and models that solve problems we once thought could not be approached by machine learning. Year after year, performance improves, to the point that it is becoming difficult to differentiate between the strengths of deep neural network models given our current data sets. However, due to their significant requirements in terms of hardware resources, all but few architectures are dependent on cloud environments. Yet, there are many use cases for neural networks in a variety of areas, many of which require consumer-grade hardware or highly resource constrained embedded devices. This paper offers a comparison of selected state-of-the-art neural network miniaturization methods, and proposes a new approach, PXNOR, that achieves a noteworthy accuracy, remarkable inference speed and significant memory savings. PXNOR seeks to fully replace traditional convolutional filters with approximate operations, while replacing all multiplications and additions with simpler, much faster versions such as XNOR and bitcounting, which are implemented at hardware level on all existing platforms.","PeriodicalId":309683,"journal":{"name":"2019 18th RoEduNet Conference: Networking in Education and Research (RoEduNet)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PXNOR: Perturbative Binary Neural Network\",\"authors\":\"Vlad Pelin, I. Radoi\",\"doi\":\"10.1109/ROEDUNET.2019.8909493\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Research into deep neural networks has brought about architectures and models that solve problems we once thought could not be approached by machine learning. Year after year, performance improves, to the point that it is becoming difficult to differentiate between the strengths of deep neural network models given our current data sets. However, due to their significant requirements in terms of hardware resources, all but few architectures are dependent on cloud environments. Yet, there are many use cases for neural networks in a variety of areas, many of which require consumer-grade hardware or highly resource constrained embedded devices. This paper offers a comparison of selected state-of-the-art neural network miniaturization methods, and proposes a new approach, PXNOR, that achieves a noteworthy accuracy, remarkable inference speed and significant memory savings. 
PXNOR seeks to fully replace traditional convolutional filters with approximate operations, while replacing all multiplications and additions with simpler, much faster versions such as XNOR and bitcounting, which are implemented at hardware level on all existing platforms.\",\"PeriodicalId\":309683,\"journal\":{\"name\":\"2019 18th RoEduNet Conference: Networking in Education and Research (RoEduNet)\",\"volume\":\"118 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 18th RoEduNet Conference: Networking in Education and Research (RoEduNet)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROEDUNET.2019.8909493\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 18th RoEduNet Conference: Networking in Education and Research (RoEduNet)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROEDUNET.2019.8909493","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Research into deep neural networks has brought about architectures and models that solve problems we once thought could not be approached by machine learning. Year after year, performance improves, to the point that it is becoming difficult to differentiate between the strengths of deep neural network models on our current data sets. However, due to their significant hardware resource requirements, all but a few architectures depend on cloud environments. Yet there are many use cases for neural networks across a variety of areas, many of which must run on consumer-grade hardware or on highly resource-constrained embedded devices. This paper offers a comparison of selected state-of-the-art neural network miniaturization methods and proposes a new approach, PXNOR, that achieves noteworthy accuracy, remarkable inference speed and significant memory savings. PXNOR seeks to fully replace traditional convolutional filters with approximate operations, replacing all multiplications and additions with simpler, much faster operations such as XNOR and bitcounting, which are implemented at the hardware level on all existing platforms.
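The core primitive the abstract refers to, replacing multiply-accumulate with XNOR plus bitcounting over binarized values, can be illustrated with a minimal sketch. This shows the standard binary-network identity used by XNOR-Net-style approaches, not the paper's own implementation; the function names and the bit-packing scheme below are assumptions made for clarity.

    # Illustrative sketch: binary dot product via XNOR + popcount.
    # NOT the paper's code; packing scheme and names are assumed.

    def pack_bits(values):
        """Pack a sequence of +1/-1 values into an int, one bit each (+1 -> 1)."""
        word = 0
        for i, v in enumerate(values):
            if v > 0:
                word |= 1 << i
        return word

    def xnor_popcount_dot(a_bits, b_bits, n):
        """Dot product of two packed +/-1 vectors of length n.

        XNOR marks positions where the signs match; popcount tallies them.
        Each match contributes +1 and each mismatch -1, so the result is
        2 * matches - n, with no multiplications or additions per element.
        """
        mask = (1 << n) - 1
        matches = bin(~(a_bits ^ b_bits) & mask).count("1")  # XNOR + popcount
        return 2 * matches - n

    # Example: [-1, +1, +1, -1] . [+1, +1, -1, -1] = -1 + 1 - 1 + 1 = 0
    a = pack_bits([-1, 1, 1, -1])
    b = pack_bits([1, 1, -1, -1])
    assert xnor_popcount_dot(a, b, 4) == 0

A binarized convolution applies this identity per filter window, which is why one wide XNOR followed by a hardware popcount instruction can stand in for dozens of floating-point multiply-adds, yielding the speed and memory savings the abstract claims.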