{"title":"用于训练简化双极形态神经网络的逐层知识蒸馏法","authors":"M. V. Zingerenko, E. E. Limonova","doi":"10.1134/s0361768823100080","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation based on summation and maximum operations is a bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency and a new approach to training based on continuous approximations of the maximum and knowledge distillation. Experiments were carried out on the MNIST dataset using a LeNet-like neural network architecture and on the CIFAR10 dataset using a ResNet-22 model architecture. The proposed training method achieves 99.45% classification accuracy on the LeNet-like model (the same accuracy as that provided by the classical network) and 86.69% accuracy on the ResNet-22 model compared with 86.43% accuracy of the classical model. The results show that the proposed method with log-sum-exp (LSE) approximation of the maximum and layer-by-layer knowledge distillation makes it possible to obtain a simplified bipolar morphological network that is not inferior to the classical networks.</p>","PeriodicalId":54555,"journal":{"name":"Programming and Computer Software","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Layer-by-Layer Knowledge Distillation for Training Simplified Bipolar Morphological Neural Networks\",\"authors\":\"M. V. Zingerenko, E. E. Limonova\",\"doi\":\"10.1134/s0361768823100080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Abstract</h3><p>Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation based on summation and maximum operations is a bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency and a new approach to training based on continuous approximations of the maximum and knowledge distillation. Experiments were carried out on the MNIST dataset using a LeNet-like neural network architecture and on the CIFAR10 dataset using a ResNet-22 model architecture. The proposed training method achieves 99.45% classification accuracy on the LeNet-like model (the same accuracy as that provided by the classical network) and 86.69% accuracy on the ResNet-22 model compared with 86.43% accuracy of the classical model. 
The results show that the proposed method with log-sum-exp (LSE) approximation of the maximum and layer-by-layer knowledge distillation makes it possible to obtain a simplified bipolar morphological network that is not inferior to the classical networks.</p>\",\"PeriodicalId\":54555,\"journal\":{\"name\":\"Programming and Computer Software\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Programming and Computer Software\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1134/s0361768823100080\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Programming and Computer Software","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1134/s0361768823100080","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Layer-by-Layer Knowledge Distillation for Training Simplified Bipolar Morphological Neural Networks
Abstract
Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation, based on summation and maximum operations, is the bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency, together with a new training approach based on continuous approximations of the maximum and on knowledge distillation. Experiments were carried out on the MNIST dataset with a LeNet-like neural network architecture and on the CIFAR10 dataset with a ResNet-22 architecture. The proposed training method achieves a classification accuracy of 99.45% with the LeNet-like model (matching the classical network) and 86.69% with the ResNet-22 model, compared with 86.43% for the classical model. The results show that the proposed method, combining a log-sum-exp (LSE) approximation of the maximum with layer-by-layer knowledge distillation, makes it possible to obtain a simplified bipolar morphological network that is not inferior to classical networks.
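To make the two ingredients named in the abstract concrete, the NumPy sketch below illustrates (i) the log-sum-exp surrogate for the maximum, (ii) the four-branch bipolar morphological neuron as formulated in earlier work on such networks, and (iii) a per-layer distillation loss. All names (lse_max, bm_neuron, layer_distill_loss) are illustrative, not code from the paper, and the paper's improved neuron structure and exact training losses may differ from this baseline formulation.

    import numpy as np

    EPS = 1e-6  # guards log(0) when an input part is exactly zero

    def lse_max(z, alpha, axis=-1):
        # Smooth log-sum-exp approximation of the maximum:
        # (1/alpha) * log(sum(exp(alpha * z))) -> max(z) as alpha -> infinity.
        # The max is subtracted before exponentiation for numerical stability.
        z = alpha * z
        z_max = np.max(z, axis=axis, keepdims=True)
        return (np.squeeze(z_max, axis=axis)
                + np.log(np.sum(np.exp(z - z_max), axis=axis))) / alpha

    def bm_neuron(x, v_pos, v_neg, bias, alpha=None):
        # One bipolar morphological neuron. v_pos and v_neg play the role of
        # ln|w| for the positive and negative parts of the classical weights.
        # alpha=None uses the hard max; a finite alpha uses the differentiable
        # LSE surrogate, which is what enables gradient-based training.
        m = (lambda z: np.max(z, axis=-1)) if alpha is None \
            else (lambda z: lse_max(z, alpha))
        ln_x_pos = np.log(np.maximum(x, 0.0) + EPS)   # ln of positive input part
        ln_x_neg = np.log(np.maximum(-x, 0.0) + EPS)  # ln of negative input part
        # Four max-plus branches replace the multiply-accumulate of a classical
        # neuron, using exp(max_j(v_j + ln x_j)) = max_j(exp(v_j) * x_j):
        return (np.exp(m(v_pos + ln_x_pos)) - np.exp(m(v_pos + ln_x_neg))
                - np.exp(m(v_neg + ln_x_pos)) + np.exp(m(v_neg + ln_x_neg))
                + bias)

    def layer_distill_loss(bm_out, teacher_out):
        # Layer-by-layer distillation idea: each converted BM layer is trained
        # to reproduce the activations of the matching classical (teacher) layer.
        return np.mean((bm_out - teacher_out) ** 2)

The LSE surrogate matters because the hard maximum propagates gradient only to the single maximal input, stalling the other weights; a finite alpha spreads gradient across all branches and can be annealed toward the hard max as training converges.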
Journal Description
Programming and Computer Software is a peer-reviewed journal devoted to problems in all areas of computer science: operating systems, compiler technology, software engineering, artificial intelligence, etc.