High-performance spiking neural net accelerators for embedded computer vision applications
J. K. Kim, Phil C. Knag, Thomas Chen, Chester Liu, Ching-En Lee, Zhengya Zhang
2017 IEEE SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S), October 2017. DOI: 10.1109/S3S.2017.8309204
A key step in computer vision algorithms is extracting and identifying relevant features from raw data. In this work, we designed spiking recurrent neural net accelerators that implement a class of unsupervised machine learning algorithms known as sparse coding. The accelerators perform fast unsupervised learning of features and extract sparse representations of inputs for low-power classification. By exploiting high sparsity, spiking neurons, and error tolerance, the compact accelerator chips process images at several hundred megapixels per second while dissipating less than 10 mW. The accelerators can be embedded in sensors as front-end processors for feature learning, encoding, and compression.
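For readers unfamiliar with how spiking recurrent networks can produce sparse codes, the sketch below illustrates the general idea with a simplified integrate-and-fire encoder in Python. It is not the accelerator architecture described in the paper; the dictionary `D`, the LCA-style inhibition weights `W`, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of sparse coding with integrate-and-fire neurons
# (illustrative assumption, not the paper's hardware design).
# Each neuron integrates feedforward drive from its dictionary feature and
# recurrent inhibition triggered by other neurons' spikes; the resulting
# spike counts serve as the sparse code of the input patch.

rng = np.random.default_rng(0)

n_pixels, n_neurons, n_steps = 64, 128, 100
threshold, leak = 1.0, 0.05

# Random, normalized dictionary standing in for learned features (assumption).
D = rng.standard_normal((n_pixels, n_neurons))
D /= np.linalg.norm(D, axis=0)

# Lateral inhibition proportional to feature overlap (LCA-style assumption).
W = D.T @ D - np.eye(n_neurons)

def encode(x):
    """Return spike counts (the sparse code) for one input patch x."""
    potential = np.zeros(n_neurons)
    spikes = np.zeros(n_neurons)
    drive = D.T @ x                      # feedforward input to each neuron
    for _ in range(n_steps):
        potential += drive - leak * potential
        fired = potential >= threshold
        potential[fired] = 0.0           # reset membrane after a spike
        potential -= W @ fired           # recurrent inhibition from spikes
        spikes += fired
    return spikes

x = rng.standard_normal(n_pixels)        # stand-in for an image patch
code = encode(x)
print("active neurons:", int(np.count_nonzero(code)), "of", n_neurons)
```

Because inhibition suppresses neurons whose features overlap with those that have already fired, only a small fraction of neurons remain active, which is the sparsity the accelerators exploit for low-power operation.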