Reservoir Computing Hardware for Time Series Forecasting

E. S. Skibinsky-Gitlin, M. Alomar, E. Isern, M. Roca, V. Canals, J. Rosselló
{"title":"用于时间序列预测的油藏计算硬件","authors":"E. S. Skibinsky-Gitlin, M. Alomar, E. Isern, M. Roca, V. Canals, J. Rosselló","doi":"10.1109/PATMOS.2018.8463994","DOIUrl":null,"url":null,"abstract":"Hardware implementation of Recurrent neural networks are able to increase the computing capacity in relation to software, so it can be of high interest when ultra-high speed processing is a requirement. However, the traditional hardware realization of neural networks has a cost in terms of power dissipation and circuit area due to the need of implementing a large quantity of binary multipliers as part of the synapses process. In this paper, a recurrent neural network scheme known as simple cyclic reservoir is implemented for time series processing. Synapses are implemented using single shift-add operations that maintains a similar accuracy with respect to full multipliers but with high savings in terms of area and power. The network architecture takes advantage of the fixed connectivity of the reservoir that only modifies the output layer of the network. Such design is synthesized in a digital circuitry, evaluated for a time-series benchmark prediction task and compared with previously published hardware implementation of a Reservoir Computing systems.","PeriodicalId":234100,"journal":{"name":"2018 28th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Reservoir Computing Hardware for Time Series Forecasting\",\"authors\":\"E. S. Skibinsky-Gitlin, M. Alomar, E. Isern, M. Roca, V. Canals, J. Rosselló\",\"doi\":\"10.1109/PATMOS.2018.8463994\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hardware implementation of Recurrent neural networks are able to increase the computing capacity in relation to software, so it can be of high interest when ultra-high speed processing is a requirement. However, the traditional hardware realization of neural networks has a cost in terms of power dissipation and circuit area due to the need of implementing a large quantity of binary multipliers as part of the synapses process. In this paper, a recurrent neural network scheme known as simple cyclic reservoir is implemented for time series processing. Synapses are implemented using single shift-add operations that maintains a similar accuracy with respect to full multipliers but with high savings in terms of area and power. The network architecture takes advantage of the fixed connectivity of the reservoir that only modifies the output layer of the network. 
Such design is synthesized in a digital circuitry, evaluated for a time-series benchmark prediction task and compared with previously published hardware implementation of a Reservoir Computing systems.\",\"PeriodicalId\":234100,\"journal\":{\"name\":\"2018 28th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 28th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PATMOS.2018.8463994\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 28th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PATMOS.2018.8463994","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hardware implementations of recurrent neural networks can offer greater computing capacity than software, making them attractive when ultra-high-speed processing is required. However, traditional hardware realizations of neural networks incur a cost in power dissipation and circuit area, since a large number of binary multipliers must be implemented for the synapses. In this paper, a recurrent neural network scheme known as a simple cyclic reservoir is implemented for time-series processing. Synapses are realized with single shift-add operations, which maintain accuracy similar to that of full multipliers while yielding large savings in area and power. The network architecture exploits the fixed connectivity of the reservoir, so that training modifies only the output layer of the network. The design is synthesized as digital circuitry, evaluated on a time-series benchmark prediction task, and compared with previously published hardware implementations of reservoir computing systems.
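To make the scheme concrete, below is a minimal software sketch of a simple cyclic reservoir whose recurrent and input weights are restricted to powers of two, so that each synaptic multiplication reduces to a bit shift in hardware, with only the linear readout trained. The reservoir size, weight values, sign pattern, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of a simple cyclic reservoir (SCR) with power-of-two weights.
# All parameter values below are assumed for illustration only.

N = 50                       # reservoir size (assumed)
r = 2.0 ** -1                # cycle weight: one right shift in hardware
v = 2.0 ** -2                # input weight: two right shifts
rng = np.random.default_rng(0)
signs = np.where(rng.random(N) < 0.5, -1.0, 1.0)  # fixed +/- input pattern

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        # Cyclic connectivity: neuron i is driven only by neuron i-1,
        # so each recurrent synapse is a single shift(-add) per step.
        x = np.tanh(r * np.roll(x, 1) + signs * v * u_t)
        states[t] = x
    return states

def train_readout(states, target, reg=1e-6):
    """Ridge regression for the output layer, the only trained part."""
    A = states.T @ states + reg * np.eye(N)
    return np.linalg.solve(A, states.T @ target)

# Usage: one-step-ahead prediction on a toy sine series.
u = np.sin(0.2 * np.arange(1000))
S = run_reservoir(u[:-1])
w_out = train_readout(S, u[1:])
pred = S @ w_out
```

Because the reservoir weights are fixed powers of two, a hardware version replaces every multiplier in the recurrent path with a shifter, which is the source of the area and power savings claimed in the abstract.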