{"title":"Towards Stochasticity of Regularization in Deep Neural Networks","authors":"Ljubinka Sandjakoska, A. Bogdanova","doi":"10.1109/NEUREL.2018.8587027","DOIUrl":null,"url":null,"abstract":"The high capacity of deep neural networks, developed for complex data, evokes its proneness to overfitting. A lot of attention is paid on finding flexible solutions to this problem. To achieve flexibility, as a very challenging issue in improving the ability of generalization, deep networks have to deal with the stochastic effects of regularization. In this paper we propose a methodological framework for dealing with the stochasticity in regularized deep neural network. Basics of dropout as ensemble method for regularization are presented, followed by introducing new method for dropout regularization and its application in molecular dynamics simulations. Results from the simulation show that, the stochastic behavior cannot be avoided but we have to find way to deal with it. The proposed dropout method improves the state-of-the-art of applied deep neural networks on the benchmark dataset.","PeriodicalId":371831,"journal":{"name":"2018 14th Symposium on Neural Networks and Applications (NEUREL)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 14th Symposium on Neural Networks and Applications (NEUREL)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NEUREL.2018.8587027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The high capacity of deep neural networks developed for complex data makes them prone to overfitting. Much attention has been paid to finding flexible solutions to this problem. Achieving flexibility, a challenging issue in improving generalization, requires deep networks to deal with the stochastic effects of regularization. In this paper we propose a methodological framework for dealing with stochasticity in regularized deep neural networks. The basics of dropout as an ensemble method for regularization are presented, followed by a new method for dropout regularization and its application in molecular dynamics simulations. Simulation results show that the stochastic behavior cannot be avoided; instead, a way to deal with it must be found. The proposed dropout method improves on the state of the art of applied deep neural networks on the benchmark dataset.
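
The abstract does not spell out the proposed dropout variant, so as background here is a minimal sketch of the standard (inverted) dropout the paper builds on: each unit is zeroed with probability p_drop during training and the survivors are rescaled, so that at test time the full network approximates averaging the implicit ensemble of subnetworks. The function name `dropout_forward` and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True, rng=None):
    # Inverted dropout: zero each unit with probability p_drop and
    # rescale the survivors by 1/(1 - p_drop) so the expected
    # activation matches the full network used at test time.
    if not training or p_drop == 0.0:
        return x  # at inference the full (ensemble-averaged) network is used
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

# Example: activations of one hidden layer, shape (batch, units)
h = np.ones((2, 4))
print(dropout_forward(h, p_drop=0.5, training=True))  # random zeros; survivors scaled to 2.0
print(dropout_forward(h, training=False))             # unchanged at test time
```

The random mask is the source of the stochasticity the paper addresses: every forward pass trains a different weight-sharing subnetwork, so repeated runs of the same training setup yield different trajectories, which is why the authors argue the stochastic behavior must be managed rather than eliminated.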