Deep neural nets with fixed bias configuration

IF 1.1 Q2 MATHEMATICS, APPLIED
Harbir Antil, Thomas S. Brown, R. Lohner, F. Togashi, Deepanshu Verma
{"title":"具有固定偏置配置的深度神经网络","authors":"Harbir Antil, Thomas S. Brown, R. Lohner, F. Togashi, Deepanshu Verma","doi":"10.3934/naco.2022016","DOIUrl":null,"url":null,"abstract":"For any given neural network architecture a permutation of weights and biases results in the same functional network. This implies that optimization algorithms used to 'train' or 'learn' the network are faced with a very large number (in the millions even for small networks) of equivalent optimal solutions in the parameter space. To the best of our knowledge, this observation is absent in the literature. In order to narrow down the parameter search space, a novel technique is introduced in order to fix the bias vector configurations to be monotonically increasing. This is achieved by augmenting a typical learning problem with inequality constraints on the bias vectors in each layer. A Moreau-Yosida regularization based algorithm is proposed to handle these inequality constraints and a theoretical convergence of this algorithm is established. Applications of the proposed approach to standard trigonometric functions and more challenging stiff ordinary differential equations arising in chemically reacting flows clearly illustrate the benefits of the proposed approach. Further application of the approach on the MNIST dataset within TensorFlow, illustrate that the presented approach can be incorporated in any of the existing machine learning libraries.","PeriodicalId":44957,"journal":{"name":"Numerical Algebra Control and Optimization","volume":"33 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep neural nets with fixed bias configuration\",\"authors\":\"Harbir Antil, Thomas S. Brown, R. Lohner, F. Togashi, Deepanshu Verma\",\"doi\":\"10.3934/naco.2022016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For any given neural network architecture a permutation of weights and biases results in the same functional network. This implies that optimization algorithms used to 'train' or 'learn' the network are faced with a very large number (in the millions even for small networks) of equivalent optimal solutions in the parameter space. To the best of our knowledge, this observation is absent in the literature. In order to narrow down the parameter search space, a novel technique is introduced in order to fix the bias vector configurations to be monotonically increasing. This is achieved by augmenting a typical learning problem with inequality constraints on the bias vectors in each layer. A Moreau-Yosida regularization based algorithm is proposed to handle these inequality constraints and a theoretical convergence of this algorithm is established. Applications of the proposed approach to standard trigonometric functions and more challenging stiff ordinary differential equations arising in chemically reacting flows clearly illustrate the benefits of the proposed approach. 
Further application of the approach on the MNIST dataset within TensorFlow, illustrate that the presented approach can be incorporated in any of the existing machine learning libraries.\",\"PeriodicalId\":44957,\"journal\":{\"name\":\"Numerical Algebra Control and Optimization\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2021-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Numerical Algebra Control and Optimization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3934/naco.2022016\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numerical Algebra Control and Optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3934/naco.2022016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 4

Abstract

For any given neural network architecture, a permutation of weights and biases results in the same functional network. This implies that optimization algorithms used to 'train' or 'learn' the network face a very large number (in the millions even for small networks) of equivalent optimal solutions in the parameter space. To the best of our knowledge, this observation is absent from the literature. To narrow down the parameter search space, a novel technique is introduced that fixes the bias vector configurations to be monotonically increasing. This is achieved by augmenting a typical learning problem with inequality constraints on the bias vectors in each layer. A Moreau-Yosida regularization-based algorithm is proposed to handle these inequality constraints, and theoretical convergence of this algorithm is established. Applications of the proposed approach to standard trigonometric functions and to the more challenging stiff ordinary differential equations arising in chemically reacting flows clearly illustrate its benefits. Further application of the approach to the MNIST dataset within TensorFlow illustrates that the presented approach can be incorporated into any existing machine learning library.
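As a rough illustration of how such a constraint can be handled in practice, the sketch below adds a quadratic penalty on violations of a monotonically increasing bias ordering to an ordinary TensorFlow training loop. This is a minimal sketch under stated assumptions, not the authors' implementation: the penalty weight GAMMA, the helper monotone_bias_penalty, the train_step function, the small sin(x) regression task, and the network sizes are all illustrative choices, and the pairwise ReLU penalty on consecutive bias entries is a simplified surrogate for the Moreau-Yosida term applied to the full inequality-constrained problem described in the abstract.

```python
import tensorflow as tf

# Hypothetical penalty weight; the paper's Moreau-Yosida parameter and its
# update rule are not given here, so this value is only illustrative.
GAMMA = 10.0

def monotone_bias_penalty(model, gamma=GAMMA):
    """Quadratic penalty on violations of b[i] <= b[i+1] in each Dense
    layer's bias vector -- a simple surrogate for the Moreau-Yosida term."""
    penalty = tf.constant(0.0)
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Dense) and layer.use_bias:
            b = layer.bias
            violation = tf.nn.relu(b[:-1] - b[1:])  # positive where ordering breaks
            penalty += 0.5 * gamma * tf.reduce_sum(tf.square(violation))
    return penalty

# Small stand-in network and data; any regression task would do.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
mse = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(x, y):
    # Standard loss plus the bias-monotonicity penalty.
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = mse(y, y_pred) + monotone_bias_penalty(model)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Example: fit sin(x), akin to the trigonometric test cases mentioned above.
x = tf.random.uniform((256, 1), -3.0, 3.0)
y = tf.sin(x)
for epoch in range(200):
    loss = train_step(x, y)
```

In a full Moreau-Yosida or penalty scheme the parameter would typically be increased during training so that the monotonicity constraint is satisfied progressively more tightly; the fixed GAMMA here simply keeps the sketch short.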
Source journal
Numerical Algebra, Control and Optimization (NACO)
CiteScore: 3.10
Self-citation rate: 0.00%
Articles published: 62
Journal description: Numerical Algebra, Control and Optimization (NACO) aims at publishing original papers on any non-trivial interplay between control and optimization, and on numerical techniques for their underlying linear and nonlinear algebraic systems. Topics of interest include original research in the theory, algorithms, and applications of optimization; numerical methods for linear and nonlinear algebraic systems arising in modelling, control, and optimization; and original theoretical and applied research and development in the control of systems, including all facets of control theory and its applications. In the application areas, special interest is on artificial intelligence and data sciences. The journal also welcomes expository submissions on subjects of current relevance to its readers. Publication of papers in NACO is free of charge.