Improved Classification and Reconstruction by Introducing Independence and Randomization in Deep Neural Networks

G. Hiranandani, H. Karnick
{"title":"Improved Classification and Reconstruction by Introducing Independence and Randomization in Deep Neural Networks","authors":"G. Hiranandani, H. Karnick","doi":"10.1109/DICTA.2015.7371270","DOIUrl":null,"url":null,"abstract":"This paper deals with a novel way of improving classification as well as reconstructions obtained from deep neural networks. The underlying ideas that have been used throughout are Independence and Randomization. The idea is to expose the inherent properties of neural network architectures and to make simpler models that are easy to implement rather than creating highly fine-tuned and complex neural network architectures. For the most basic type of deep neural network i.e. fully connected, it has been shown that dividing the data into independent components and training each component separately not only reduces the parameters to be learned but also the training is more efficient. And if the predictions are fused appropriately the overall accuracy also increases. Using the orthogonality of LAB colour space, it is shown that L,A and B components trained separately produce better reconstructions than RGB components taken together which in turn produce better reconstructions than LAB components taken together. Based on a similar approach, randomization has been injected into the networks so as to make different networks as independent as possible. Again fusing predictions appropriately increases accuracy. The best error on MNIST's test data set was 1.91% which is a drop by 1.05% in comparison to architectures that we created similar to [1]. As the technique is architecture independent it can be applied to other networks - for example CNNs or RNNs.","PeriodicalId":214897,"journal":{"name":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2015.7371270","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper presents a novel way of improving both classification and the reconstructions obtained from deep neural networks. The two ideas used throughout are independence and randomization. The aim is to exploit the inherent properties of neural network architectures and to build simpler models that are easy to implement, rather than creating highly fine-tuned and complex architectures. For the most basic type of deep neural network, i.e. fully connected networks, it is shown that dividing the data into independent components and training each component separately not only reduces the number of parameters to be learned but also makes training more efficient; if the predictions are then fused appropriately, the overall accuracy also increases. Using the orthogonality of the LAB colour space, it is shown that the L, A, and B components trained separately produce better reconstructions than the RGB components taken together, which in turn produce better reconstructions than the LAB components taken together. Following a similar approach, randomization is injected into the networks so as to make different networks as independent as possible; again, fusing the predictions appropriately increases accuracy. The best error on the MNIST test set was 1.91%, a drop of 1.05% compared with the architectures we created similar to [1]. As the technique is architecture-independent, it can be applied to other networks, for example CNNs or RNNs.
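
The per-channel idea can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' exact architecture or hyperparameters: one small fully connected network is trained on each colour channel independently (random tensors stand in for the flattened L, A, and B channels), and the three softmax outputs are fused by simple averaging. The layer sizes, optimizer, epoch count, and averaging rule are all assumptions made for illustration.

```python
# Minimal sketch (illustrative only): one small fully connected network per
# colour channel, trained independently, with predictions fused by averaging
# the softmax outputs. Layer sizes, optimizer and epochs are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp(n_in, n_hidden, n_classes):
    """One independent fully connected network for a single channel."""
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_classes),
    )

def train_channel_model(model, x, y, epochs=20, lr=1e-3):
    """Train one channel's network on its own flattened pixels only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def fuse_predictions(models, channels):
    """Average the per-channel softmax outputs and take the arg-max."""
    probs = [F.softmax(m(x), dim=1) for m, x in zip(models, channels)]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Toy usage: random tensors stand in for the flattened L, A and B channels
# of 32x32 images; real data would come from an RGB-to-LAB conversion.
n, d, n_classes = 256, 32 * 32, 10
channels = [torch.randn(n, d) for _ in range(3)]      # L, A, B
labels = torch.randint(0, n_classes, (n,))
models = [train_channel_model(make_mlp(d, 128, n_classes), c, labels)
          for c in channels]
fused = fuse_predictions(models, channels)
print("fused training accuracy:", (fused == labels).float().mean().item())
```

Other fusion rules (e.g. weighted averaging or majority voting) fit the same structure; simple averaging is used here only to keep the sketch short.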
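The randomization idea can be sketched in the same style. In the snippet below (again an assumption-laden illustration, not the mechanism described in the paper), each ensemble member gets its own random seed and a fixed random input mask so that the members are as decorrelated as possible; each member would be trained exactly as in the previous sketch, and the fused prediction again averages the softmax outputs.

```python
# Minimal sketch of the randomization idea (mechanism assumed, not taken from
# the paper): each ensemble member gets its own seed and a fixed random input
# mask, so the members are trained on partially disjoint views of the data.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_randomized_member(n_in, n_hidden, n_classes, keep=0.7, seed=0):
    """A fully connected network paired with a fixed random input mask."""
    gen = torch.Generator().manual_seed(seed)
    mask = (torch.rand(n_in, generator=gen) < keep).float()
    torch.manual_seed(seed)                 # randomize the weight init as well
    net = nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_classes),
    )
    return net, mask

def ensemble_predict(members, x):
    """Fuse the randomized networks by averaging their softmax outputs."""
    probs = [F.softmax(net(x * mask), dim=1) for net, mask in members]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Each member is trained as in the previous sketch, on its masked input x * mask.
members = [make_randomized_member(32 * 32, 128, 10, seed=s) for s in range(5)]
x = torch.randn(256, 32 * 32)
print(ensemble_predict(members, x).shape)   # torch.Size([256])
```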