{"title":"人工神经网络的前向计算模拟实现","authors":"S. Mada, Srinivas B. Mandalika","doi":"10.1109/AMS.2017.10","DOIUrl":null,"url":null,"abstract":"The algorithm used to train an Artificial Neural Network (ANN) plays an important role in its implementation. Analog VLSI implementations of ANN using back propagation algorithm for multi-layer perceptron (MLP) architectures were reported earlier. In this paper, we used an algorithm which uses forward only computation to update the weights, instead of forward and backward computation resulting in reduced computation time. The chosen algorithm, can train all types of architectures in less time, even where back propagation and other second order algorithms fail. An analog VLSI implementation of this algorithm can further reduce the area and power dissipation. To validate our idea, we designed and implemented a two input-one hidden layer-one output MLP network. All the blocks were implemented in CADENCE Virtuoso tool using the 180nm technology library. The resultant network architecture was tested successfully for digital applications like AND, OR and analog applications - compression and decompression","PeriodicalId":219494,"journal":{"name":"2017 Asia Modelling Symposium (AMS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Analog Implementation of Artificial Neural Networks Using Forward Only Computation\",\"authors\":\"S. Mada, Srinivas B. Mandalika\",\"doi\":\"10.1109/AMS.2017.10\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The algorithm used to train an Artificial Neural Network (ANN) plays an important role in its implementation. Analog VLSI implementations of ANN using back propagation algorithm for multi-layer perceptron (MLP) architectures were reported earlier. In this paper, we used an algorithm which uses forward only computation to update the weights, instead of forward and backward computation resulting in reduced computation time. The chosen algorithm, can train all types of architectures in less time, even where back propagation and other second order algorithms fail. An analog VLSI implementation of this algorithm can further reduce the area and power dissipation. To validate our idea, we designed and implemented a two input-one hidden layer-one output MLP network. All the blocks were implemented in CADENCE Virtuoso tool using the 180nm technology library. 
The resultant network architecture was tested successfully for digital applications like AND, OR and analog applications - compression and decompression\",\"PeriodicalId\":219494,\"journal\":{\"name\":\"2017 Asia Modelling Symposium (AMS)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 Asia Modelling Symposium (AMS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AMS.2017.10\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Asia Modelling Symposium (AMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AMS.2017.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The algorithm used to train an Artificial Neural Network (ANN) plays an important role in its implementation. Analog VLSI implementations of ANNs using the back-propagation algorithm for multi-layer perceptron (MLP) architectures have been reported earlier. In this paper, we use an algorithm that updates the weights using forward-only computation instead of both forward and backward computation, thereby reducing computation time. The chosen algorithm can train all types of architectures in less time, even where back-propagation and other second-order algorithms fail. An analog VLSI implementation of this algorithm can further reduce area and power dissipation. To validate our idea, we designed and implemented a two-input, one-hidden-layer, one-output MLP network. All the blocks were implemented in the Cadence Virtuoso tool using a 180 nm technology library. The resulting network architecture was tested successfully for digital applications (AND, OR) and analog applications (compression and decompression).
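The abstract does not specify the weight-update rule, but the general idea of forward-only training can be illustrated with a weight-perturbation scheme, in which each weight's error gradient is estimated from forward passes alone, with no backward pass. The sketch below trains the paper's two-input, one-hidden-layer, one-output MLP topology on the AND function; the hidden-layer width (two neurons), the perturbation step, the learning rate, and the perturbation method itself are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of forward-only training via weight perturbation.
# NOT the algorithm from the paper (the abstract does not give its
# update rule); it only shows how weights can be updated using forward
# passes alone. Hidden-layer width, eps, and lr are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    """Forward pass through a 2-input / 1-hidden-layer / 1-output MLP."""
    W1, b1, W2, b2 = params
    h = sigmoid(W1 @ x + b1)        # hidden-layer activations
    return sigmoid(W2 @ h + b2)[0]  # scalar network output

def loss(params, X, y):
    """Mean squared error over the training set (forward passes only)."""
    return np.mean([(forward(params, x) - t) ** 2 for x, t in zip(X, y)])

# 2-input AND truth table (one of the digital test cases in the abstract).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# Random initial weights: 2 hidden neurons (assumed width), 1 output.
params = [rng.normal(size=(2, 2)), rng.normal(size=2),
          rng.normal(size=(1, 2)), rng.normal(size=1)]

eps, lr = 1e-4, 2.0  # perturbation step and learning rate (assumptions)
for epoch in range(3000):
    base = loss(params, X, y)
    grads = []
    for p in params:                     # finite-difference gradient:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):  # perturb one weight at a time
            p[idx] += eps
            g[idx] = (loss(params, X, y) - base) / eps
            p[idx] -= eps                # restore the weight
        grads.append(g)
    for p, g in zip(params, grads):      # gradient-descent update
        p -= lr * g

print([round(forward(params, x), 3) for x in X])  # approaches [0, 0, 0, 1]
```

Each update here costs one extra forward pass per weight, which is inefficient in software but maps naturally onto analog hardware, where the forward path is the physical circuit and a backward path would require additional area and power.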