METHOD FOR REDUCTIVE PRUNING OF NEURAL NETWORKS AND ITS APPLICATIONS

O. Gurbych, Maksym Prymachenko
{"title":"METHOD FOR REDUCTIVE PRUNING OF NEURAL NETWORKS AND ITS APPLICATIONS","authors":"O. Gurbych, Maksym Prymachenko","doi":"10.31891/csit-2022-3-5","DOIUrl":null,"url":null,"abstract":"Trained neural networks usually contain redundant neurons that do not affect or degrade the quality of target identification. The redundancy of models possesses an excessive load on computational resources, leading to high electricity consumption. Further, the deployment and operation of such models in resource-constrained environments such as mobile phones or smart devices are either complicated or impossible. Therefore, there is a need to simplify models while maintaining their effectiveness. This work presents a method for fast neural network reduction, allowing for automatic detection and removal of a large number of redundant neurons while simultaneously improving the efficiency of the models. The technique introduces perturbations to the target variable and then identifies and removes the weights with the most considerable relative deviations from the weights of the control model. The method removes up to 90% of active weights. At the same time, unlike classical pruning methods, the efficiency of models improves simultaneously with the reduction. The scientific novelty of the work consists of method development and new practical applications. The reduction method detects and removes large groups of redundant parameters of neural networks. The logic of automatically determining the optimal number of residual \"significant\" weights was implemented. The mentioned features speed up the discovery and elimination of redundant weights; reduce required time and resources for computations; and automate the identification of the essential neurons. The method's effectiveness was demonstrated on two applied tasks: predicting the yield of chemical reactions and the molecular affinity. The implementation and applications of the method are available via the link: https://github.com/ogurbych/ann-reduction.","PeriodicalId":353631,"journal":{"name":"Computer systems and information technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer systems and information technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31891/csit-2022-3-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Trained neural networks usually contain redundant neurons that either do not affect the quality of target identification or actively degrade it. This redundancy places an excessive load on computational resources and drives up electricity consumption. Moreover, deploying and operating such models in resource-constrained environments, such as mobile phones or smart devices, is complicated or outright impossible. There is therefore a need to simplify models while maintaining their effectiveness. This work presents a method for fast neural network reduction that automatically detects and removes a large number of redundant neurons while simultaneously improving model efficiency. The technique introduces perturbations to the target variable, then identifies and removes the weights with the largest relative deviations from the weights of a control model. The method removes up to 90% of active weights, and, unlike classical pruning methods, model efficiency improves as the network is reduced. The scientific novelty of the work consists of the method itself and its new practical applications: the reduction method detects and removes large groups of redundant neural network parameters, and logic for automatically determining the optimal number of remaining "significant" weights was implemented. These features speed up the discovery and elimination of redundant weights, reduce the time and resources required for computation, and automate the identification of essential neurons. The method's effectiveness was demonstrated on two applied tasks: predicting chemical reaction yields and molecular affinity. The implementation and applications of the method are available at: https://github.com/ogurbych/ann-reduction.
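Read literally, the abstract describes comparing a control model's weights against a copy retrained on a noise-perturbed target and discarding the weights that deviated the most. The sketch below illustrates that reading in PyTorch; `train_fn`, `noise_scale`, and the fixed `prune_frac` are hypothetical placeholders (the paper determines the number of retained weights automatically), so the linked repository should be consulted for the authors' actual implementation.

```python
import copy

import torch
import torch.nn as nn


def reductive_prune(control_model: nn.Module,
                    train_fn,
                    X: torch.Tensor,
                    y: torch.Tensor,
                    noise_scale: float = 0.05,
                    prune_frac: float = 0.9) -> nn.Module:
    """Sketch of perturbation-based reduction (assumed interpretation).

    `train_fn(model, X, y)` stands in for the caller's training loop;
    `noise_scale` and the fixed `prune_frac` are illustrative only.
    """
    # Retrain a copy of the control model on a perturbed target variable.
    perturbed = copy.deepcopy(control_model)
    y_noisy = y + noise_scale * y.std() * torch.randn_like(y)
    train_fn(perturbed, X, y_noisy)

    # Relative deviation of every weight from its control counterpart.
    eps = 1e-8
    deviations = [
        ((p_pert - p_ctrl).abs() / (p_ctrl.abs() + eps)).flatten()
        for p_ctrl, p_pert in zip(control_model.parameters(),
                                  perturbed.parameters())
    ]
    all_devs = torch.cat(deviations)

    # Weights above this quantile are the `prune_frac` most-deviating ones.
    threshold = torch.quantile(all_devs, 1.0 - prune_frac)

    # Zero out (remove) the most-deviating weights in the control model.
    with torch.no_grad():
        for p_ctrl, p_pert in zip(control_model.parameters(),
                                  perturbed.parameters()):
            dev = (p_pert - p_ctrl).abs() / (p_ctrl.abs() + eps)
            p_ctrl.masked_fill_(dev >= threshold, 0.0)
    return control_model
```

In practice, the pruned model would be re-evaluated on a held-out set after reduction; per the abstract, up to 90% of active weights can be removed this way while model efficiency improves rather than degrades.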