Deep learning using partitioned data vectors

B. Mitchell, H. Tosun, John W. Sheppard
{"title":"使用分区数据向量的深度学习","authors":"B. Mitchell, H. Tosun, John W. Sheppard","doi":"10.1109/IJCNN.2015.7280484","DOIUrl":null,"url":null,"abstract":"Deep learning is a popular field that encompasses a range of multi-layer connectionist techniques. While these techniques have achieved great success on a number of difficult computer vision problems, the representation biases that allow this success have not been thoroughly explored. In this paper, we examine the hypothesis that one strength of many deep learning algorithms is their ability to exploit spatially local statistical information. We present a formal description of how data vectors can be partitioned into sub-vectors that preserve spatially local information. As a test case, we then use statistical models to examine how much of such structure exists in the MNIST dataset. Finally, we present experimental results from training RBMs using partitioned data, and demonstrate the advantages they have over non-partitioned RBMs. Through these results, we show how the performance advantage is reliant on spatially local structure, by demonstrating the performance impact of randomly permuting the input data to destroy local structure. Overall, our results support the hypothesis that a representation bias reliant upon spatially local statistical information can improve performance, so long as this bias is a good match for the data. We also suggest statistical tools for determining a priori whether a dataset is a good match for this bias or not.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"5 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Deep learning using partitioned data vectors\",\"authors\":\"B. Mitchell, H. Tosun, John W. Sheppard\",\"doi\":\"10.1109/IJCNN.2015.7280484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning is a popular field that encompasses a range of multi-layer connectionist techniques. While these techniques have achieved great success on a number of difficult computer vision problems, the representation biases that allow this success have not been thoroughly explored. In this paper, we examine the hypothesis that one strength of many deep learning algorithms is their ability to exploit spatially local statistical information. We present a formal description of how data vectors can be partitioned into sub-vectors that preserve spatially local information. As a test case, we then use statistical models to examine how much of such structure exists in the MNIST dataset. Finally, we present experimental results from training RBMs using partitioned data, and demonstrate the advantages they have over non-partitioned RBMs. Through these results, we show how the performance advantage is reliant on spatially local structure, by demonstrating the performance impact of randomly permuting the input data to destroy local structure. Overall, our results support the hypothesis that a representation bias reliant upon spatially local statistical information can improve performance, so long as this bias is a good match for the data. 
We also suggest statistical tools for determining a priori whether a dataset is a good match for this bias or not.\",\"PeriodicalId\":6539,\"journal\":{\"name\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"5 1\",\"pages\":\"1-8\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2015.7280484\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2015.7280484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Deep learning is a popular field that encompasses a range of multi-layer connectionist techniques. While these techniques have achieved great success on a number of difficult computer vision problems, the representation biases that allow this success have not been thoroughly explored. In this paper, we examine the hypothesis that one strength of many deep learning algorithms is their ability to exploit spatially local statistical information. We present a formal description of how data vectors can be partitioned into sub-vectors that preserve spatially local information. As a test case, we then use statistical models to examine how much of such structure exists in the MNIST dataset. Finally, we present experimental results from training RBMs using partitioned data, and demonstrate the advantages they have over non-partitioned RBMs. Through these results, we show how the performance advantage is reliant on spatially local structure, by demonstrating the performance impact of randomly permuting the input data to destroy local structure. Overall, our results support the hypothesis that a representation bias reliant upon spatially local statistical information can improve performance, so long as this bias is a good match for the data. We also suggest statistical tools for determining a priori whether a dataset is a good match for this bias or not.
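The two operations the abstract relies on, splitting a flattened input vector into spatially local sub-vectors and applying a fixed random permutation to destroy that locality, can be illustrated with a short sketch. This is not the authors' code; the function and parameter names (partition_image_vector, side, patch) are illustrative assumptions, and the example only shows how an MNIST-style 28x28 vector might be partitioned into non-overlapping patches and how a permutation scrambles the local structure.

```python
import numpy as np

def partition_image_vector(x, side=28, patch=7):
    """Split a flattened side*side image vector into non-overlapping
    patch*patch sub-vectors, each covering a spatially local region.
    (Hypothetical helper; the paper's actual partitioning may differ.)"""
    img = x.reshape(side, side)
    blocks = []
    for r in range(0, side, patch):
        for c in range(0, side, patch):
            blocks.append(img[r:r + patch, c:c + patch].reshape(-1))
    return np.stack(blocks)  # shape: (num_patches, patch*patch)

rng = np.random.default_rng(0)
x = rng.random(28 * 28)            # stand-in for one flattened MNIST image

# Spatially local sub-vectors: each row holds one 7x7 patch of the image.
local_parts = partition_image_vector(x)

# Destroying locality: one fixed random permutation applied to every input
# before partitioning, so each sub-vector mixes pixels from across the image.
perm = rng.permutation(28 * 28)
scrambled_parts = partition_image_vector(x[perm])

print(local_parts.shape, scrambled_parts.shape)  # (16, 49) (16, 49)
```

Under this reading, each sub-vector (rather than the full vector) would be modeled separately, e.g. by its own RBM, which is what makes the comparison against a permuted input a direct test of the spatial-locality bias.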