Kristína Malinovská, L. Malinovský, Pavel Kršek, Svatopluk Kraus, I. Farkaš
UBAL. DOI: 10.1145/3372422.3372443. Published in: Proceedings of the 2019 2nd International Conference on Computational Intelligence and Intelligent Systems, 2019-11-23.
Artificial neural networks, in particular deep end-to-end architectures trained by error backpropagation (BP), are currently the most widely used learning systems. However, learning in such systems is only loosely inspired by actual neural mechanisms. Algorithms based on local activation differences were designed as a biologically plausible alternative to BP. We propose Universal Bidirectional Activation-based Learning (UBAL), a novel neural model derived from contrastive Hebbian learning. Similar to what is assumed about learning in the brain, our model defines a single learning rule that can realize multiple modes of learning via dedicated hyperparameters. Unlike related models, ours consists of mutually dependent yet separate weight matrices for the two directions of activation propagation. We show that UBAL can learn different tasks (such as pattern retrieval, denoising, or classification) with different settings of the learning hyperparameters. We also demonstrate the performance of our algorithm on a machine learning benchmark (MNIST). The experimental results presented in this paper confirm that UBAL is comparable with a basic BP-trained multilayer network and with related biologically motivated models.
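To make the bidirectional idea concrete, the following is a minimal sketch of a two-layer network with separate forward and backward weight matrices trained by a generic contrastive-Hebbian-style rule. This is an illustrative assumption, not the actual UBAL equations from the paper: the names `W_f`, `W_b`, `train_pair`, the sigmoid activation, and the learning rate are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Separate weight matrices for the two directions of activation
# propagation (x -> y and y -> x), as the abstract describes.
n_in, n_out, lr = 4, 3, 0.5
W_f = rng.normal(0.0, 0.1, (n_in, n_out))   # forward weights
W_b = rng.normal(0.0, 0.1, (n_out, n_in))   # backward weights

def train_pair(x, t):
    """One contrastive-Hebbian-style update on an input/target pair.
    NOTE: a generic local rule for illustration, not UBAL's own rule."""
    global W_f, W_b
    y_free = sigmoid(x @ W_f)   # free phase: forward prediction
    x_echo = sigmoid(t @ W_b)   # backward echo of the clamped target
    # Weight change driven by local activation differences between
    # the clamped (target) and free (predicted) activations.
    W_f += lr * np.outer(x, t - y_free)
    W_b += lr * np.outer(t, x - x_echo)

# Toy hetero-association task: learn two one-hot pattern pairs.
X = np.eye(n_in)[:2]
T = np.eye(n_out)[:2]
for _ in range(200):
    for x, t in zip(X, T):
        train_pair(x, t)

# Forward recall: predicted class per input pattern.
pred = sigmoid(X @ W_f).argmax(axis=1)
# Backward recall: reconstructed input per target pattern.
echo = sigmoid(T @ W_b).argmax(axis=1)
```

After training, forward recall maps each input to its associated target and backward recall reconstructs the input from the target, which mirrors the pattern-retrieval setting mentioned in the abstract; the two matrices are trained jointly on the same pairs but remain distinct parameters, unlike BP's transposed-weight feedback.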