{"title":"Improved Classification and Reconstruction by Introducing Independence and Randomization in Deep Neural Networks","authors":"G. Hiranandani, H. Karnick","doi":"10.1109/DICTA.2015.7371270","DOIUrl":null,"url":null,"abstract":"This paper deals with a novel way of improving classification as well as reconstructions obtained from deep neural networks. The underlying ideas that have been used throughout are Independence and Randomization. The idea is to expose the inherent properties of neural network architectures and to make simpler models that are easy to implement rather than creating highly fine-tuned and complex neural network architectures. For the most basic type of deep neural network i.e. fully connected, it has been shown that dividing the data into independent components and training each component separately not only reduces the parameters to be learned but also the training is more efficient. And if the predictions are fused appropriately the overall accuracy also increases. Using the orthogonality of LAB colour space, it is shown that L,A and B components trained separately produce better reconstructions than RGB components taken together which in turn produce better reconstructions than LAB components taken together. Based on a similar approach, randomization has been injected into the networks so as to make different networks as independent as possible. Again fusing predictions appropriately increases accuracy. The best error on MNIST's test data set was 1.91% which is a drop by 1.05% in comparison to architectures that we created similar to [1]. As the technique is architecture independent it can be applied to other networks - for example CNNs or RNNs.","PeriodicalId":214897,"journal":{"name":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2015.7371270","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper presents a novel way of improving both the classification accuracy and the reconstructions obtained from deep neural networks. The two ideas used throughout are independence and randomization. The aim is to exploit inherent properties of neural network architectures and to build simple models that are easy to implement, rather than highly fine-tuned and complex architectures. For the most basic type of deep neural network, the fully connected network, it is shown that dividing the data into independent components and training each component separately not only reduces the number of parameters to be learned but also makes training more efficient; if the resulting predictions are fused appropriately, the overall accuracy also increases. Using the orthogonality of the LAB colour space, it is shown that the L, A and B components trained separately produce better reconstructions than the RGB components taken together, which in turn produce better reconstructions than the LAB components taken together. Following a similar approach, randomization is injected into the networks so as to make different networks as independent as possible; again, fusing their predictions appropriately increases accuracy. The best error on the MNIST test set was 1.91%, a drop of 1.05% compared with architectures we created similar to [1]. As the technique is architecture independent, it can be applied to other networks, for example CNNs or RNNs.
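
As a rough illustration of the channel-independence and prediction-fusion idea summarised above, the sketch below converts colour images to LAB, trains one small fully connected classifier per channel, and fuses the per-channel class probabilities by averaging. The use of scikit-image's rgb2lab, scikit-learn's MLPClassifier, and the hidden-layer size are illustrative assumptions only and are not the architecture used in the paper.

```python
import numpy as np
from skimage.color import rgb2lab              # RGB -> LAB conversion
from sklearn.neural_network import MLPClassifier


def train_per_channel(images_rgb, labels, hidden=(256,), seed=0):
    """Train one small fully connected classifier per LAB channel.

    images_rgb: float array of shape (n, h, w, 3) with values in [0, 1].
    A sketch of the 'independent components' idea: L, A and B are
    modelled by separate networks instead of one joint network.
    """
    lab = rgb2lab(images_rgb)                  # same shape, channels are L, A, B
    n = lab.shape[0]
    models = []
    for c in range(3):                         # one network per channel
        X = lab[..., c].reshape(n, -1)         # flatten the single channel
        clf = MLPClassifier(hidden_layer_sizes=hidden,
                            random_state=seed + c,   # different seed per network
                            max_iter=50)
        clf.fit(X, labels)
        models.append(clf)
    return models


def fuse_predictions(models, images_rgb):
    """Fuse per-channel outputs by averaging predicted class probabilities."""
    lab = rgb2lab(images_rgb)
    n = lab.shape[0]
    probs = [m.predict_proba(lab[..., c].reshape(n, -1))
             for c, m in enumerate(models)]
    avg = np.mean(probs, axis=0)               # simple average fusion
    return models[0].classes_[avg.argmax(axis=1)]
```

The randomization variant mentioned in the abstract could be sketched in the same way: train several such networks on the same data but with different random seeds (and, if desired, different random subsets of features) to make them as independent as possible, then fuse their probability outputs exactly as in fuse_predictions.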