R. Rezvani, Masoud Katiraee, A. Jamalian, Shamim Mehrabi, Arash Vezvaei
{"title":"基于在线训练的多层感知器神经网络硬件设计新方法","authors":"R. Rezvani, Masoud Katiraee, A. Jamalian, Shamim Mehrabi, Arash Vezvaei","doi":"10.1109/ICCI-CC.2012.6311205","DOIUrl":null,"url":null,"abstract":"In this paper, a Multi-Layer Perceptron (MLP) has been simulated using synthesizable VHDL code. This is a well-known artificial neural network tool which is widely used for classification and function approximation problems. Our proposed model has special flexibilities and user can deter mine his/her proper parameters such as number of layers and number of neurons in each layer. The learning phase in this network model is online and after this phase, the network starts the operational phase immediately. Unlike some other similar models, in this hardware model there is no restriction on weights of the network. Weights can define as floating point type and synthesize easily. We have implemented the simulation of network described above, in two, three and four layer structure for a problem of numeric patterns recognition. The simulation results show that the network has been properly trained and can differentiate input patterns from each other with a negligible error.","PeriodicalId":427778,"journal":{"name":"2012 IEEE 11th International Conference on Cognitive Informatics and Cognitive Computing","volume":"187 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"A new method for hardware design of Multi-Layer Perceptron neural networks with online training\",\"authors\":\"R. Rezvani, Masoud Katiraee, A. Jamalian, Shamim Mehrabi, Arash Vezvaei\",\"doi\":\"10.1109/ICCI-CC.2012.6311205\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a Multi-Layer Perceptron (MLP) has been simulated using synthesizable VHDL code. This is a well-known artificial neural network tool which is widely used for classification and function approximation problems. Our proposed model has special flexibilities and user can deter mine his/her proper parameters such as number of layers and number of neurons in each layer. The learning phase in this network model is online and after this phase, the network starts the operational phase immediately. Unlike some other similar models, in this hardware model there is no restriction on weights of the network. Weights can define as floating point type and synthesize easily. We have implemented the simulation of network described above, in two, three and four layer structure for a problem of numeric patterns recognition. 
The simulation results show that the network has been properly trained and can differentiate input patterns from each other with a negligible error.\",\"PeriodicalId\":427778,\"journal\":{\"name\":\"2012 IEEE 11th International Conference on Cognitive Informatics and Cognitive Computing\",\"volume\":\"187 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE 11th International Conference on Cognitive Informatics and Cognitive Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCI-CC.2012.6311205\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE 11th International Conference on Cognitive Informatics and Cognitive Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI-CC.2012.6311205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A new method for hardware design of Multi-Layer Perceptron neural networks with online training
In this paper, a Multi-Layer Perceptron (MLP) has been simulated using synthesizable VHDL code. The MLP is a well-known artificial neural network model that is widely used for classification and function approximation problems. The proposed model is highly flexible: the user can determine parameters such as the number of layers and the number of neurons in each layer. Learning in this network model is performed online, and once the learning phase is complete the network enters the operational phase immediately. Unlike some similar models, this hardware model places no restriction on the network weights: weights can be defined as a floating-point type and synthesized easily. We have implemented the simulated network in two-, three-, and four-layer structures for a numeric pattern recognition problem. The simulation results show that the network is properly trained and can differentiate the input patterns from each other with negligible error.
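The paper's VHDL implementation is not reproduced here. As a rough software analogue of the workflow the abstract describes (a user-configurable MLP, an online learning phase followed immediately by an operational phase, and unrestricted floating-point weights), the sketch below uses Python/NumPy; the layer sizes, learning rate, and toy patterns are assumptions chosen only for illustration, not values from the paper.

```python
# Illustrative sketch only, NOT the authors' VHDL design: a configurable MLP
# whose layer sizes are chosen by the user, trained online (one sample at a
# time) and then used immediately for inference, mirroring the
# learning-phase -> operational-phase flow described in the abstract.
import numpy as np

class MLP:
    def __init__(self, layer_sizes, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Floating-point weights: one matrix (plus bias) per pair of adjacent layers.
        self.W = [rng.normal(0, 0.5, (n_in, n_out))
                  for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.b = [np.zeros(n_out) for n_out in layer_sizes[1:]]
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        # Return the activations of every layer, input included.
        activations = [np.asarray(x, dtype=float)]
        for W, b in zip(self.W, self.b):
            activations.append(self._sigmoid(activations[-1] @ W + b))
        return activations

    def train_online(self, x, target):
        # One backpropagation step per presented sample (online learning).
        acts = self.forward(x)
        delta = (acts[-1] - np.asarray(target, dtype=float)) * acts[-1] * (1 - acts[-1])
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(acts[i], delta)
            # Propagate the error backwards before updating this layer's weights.
            delta_prev = (delta @ self.W[i].T) * acts[i] * (1 - acts[i])
            self.W[i] -= self.lr * grad_W
            self.b[i] -= self.lr * delta
            delta = delta_prev

    def predict(self, x):
        return self.forward(x)[-1]

# Toy usage: a three-layer structure (input, hidden, output) distinguishing
# two hypothetical 4-pixel patterns; the patterns are illustrative only.
if __name__ == "__main__":
    net = MLP([4, 6, 2], lr=0.5)
    patterns = {(1, 0, 1, 0): [1, 0], (0, 1, 0, 1): [0, 1]}
    for _ in range(2000):                      # learning phase (online)
        for x, t in patterns.items():
            net.train_online(x, t)
    for x in patterns:                         # operational phase
        print(x, np.round(net.predict(x), 3))
```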