How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model

Impact Factor: 11.6 · CAS Zone 1 (Physics & Astronomy) · JCR Q1, PHYSICS, MULTIDISCIPLINARY
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart
{"title":"How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model","authors":"Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart","doi":"10.1103/physrevx.14.031001","DOIUrl":null,"url":null,"abstract":"Deep learning algorithms demonstrate a surprising ability to learn high-dimensional tasks from limited examples. This is commonly attributed to the depth of neural networks, enabling them to build a hierarchy of abstract, low-dimensional data representations. However, how many training examples are required to learn such representations remains unknown. To quantitatively study this question, we introduce the random hierarchy model: a family of synthetic tasks inspired by the hierarchical structure of language and images. The model is a classification task where each class corresponds to a group of high-level features, chosen among several equivalent groups associated with the same class. In turn, each feature corresponds to a group of subfeatures chosen among several equivalent groups and so on, following a hierarchy of composition rules. We find that deep networks learn the task by developing internal representations invariant to exchanging equivalent groups. Moreover, the number of data required corresponds to the point where correlations between low-level features and classes become detectable. Overall, our results indicate how deep networks overcome the curse of dimensionality by building invariant representations and provide an estimate of the number of data required to learn a hierarchical task.","PeriodicalId":20161,"journal":{"name":"Physical Review X","volume":null,"pages":null},"PeriodicalIF":11.6000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physical Review X","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1103/physrevx.14.031001","RegionNum":1,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0

Abstract

Deep learning algorithms demonstrate a surprising ability to learn high-dimensional tasks from limited examples. This is commonly attributed to the depth of neural networks, enabling them to build a hierarchy of abstract, low-dimensional data representations. However, how many training examples are required to learn such representations remains unknown. To quantitatively study this question, we introduce the random hierarchy model: a family of synthetic tasks inspired by the hierarchical structure of language and images. The model is a classification task where each class corresponds to a group of high-level features, chosen among several equivalent groups associated with the same class. In turn, each feature corresponds to a group of subfeatures chosen among several equivalent groups and so on, following a hierarchy of composition rules. We find that deep networks learn the task by developing internal representations invariant to exchanging equivalent groups. Moreover, the number of data required corresponds to the point where correlations between low-level features and classes become detectable. Overall, our results indicate how deep networks overcome the curse of dimensionality by building invariant representations and provide an estimate of the number of data required to learn a hierarchical task.
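The generative process described in the abstract lends itself to a short illustration. Below is a minimal Python sketch of sampling inputs from a Random Hierarchy Model. The parameter names (n_c classes, vocabulary size v, m equivalent groups per feature, s subfeatures per group, depth L) and the helpers make_rules and sample are illustrative assumptions, not the authors' code, and the sketch omits constraints of the full model (e.g., keeping the rule sets of different features disjoint so the classification task stays unambiguous).

import random

def make_rules(n_symbols, v, m, s, rng):
    # For each higher-level symbol, draw m distinct groups of s
    # lower-level features; the m groups are "equivalent" in that any
    # of them may represent that symbol in the input.
    rules = {}
    for symbol in range(n_symbols):
        groups = set()
        while len(groups) < m:
            groups.add(tuple(rng.choices(range(v), k=s)))
        rules[symbol] = sorted(groups)
    return rules

def sample(label, rule_levels, rng):
    # Expand a class label into s**L low-level features by picking,
    # at every level, one of the equivalent groups for each symbol.
    symbols = [label]
    for rules in rule_levels:
        symbols = [f for sym in symbols for f in rng.choice(rules[sym])]
    return symbols

rng = random.Random(0)
n_c, v, m, s, L = 2, 4, 3, 2, 3  # toy sizes, for illustration only
rule_levels = [make_rules(n_c if level == 0 else v, v, m, s, rng)
               for level in range(L)]
x = sample(label=1, rule_levels=rule_levels, rng=rng)
print(x)  # a list of s**L = 8 low-level features encoding class 1

Swapping any chosen group for one of its m - 1 equivalents leaves the label unchanged; this is precisely the invariance that, per the abstract, deep networks develop in their internal representations, and the sample-complexity estimate corresponds to the point where correlations between these low-level features and the class become detectable.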


Source Journal

Physical Review X (PHYSICS, MULTIDISCIPLINARY)
CiteScore: 24.60
Self-citation rate: 1.60%
Articles per year: 197
Review time: 3 months

Journal description: Physical Review X (PRX) is an exclusively online, fully open-access journal that emphasizes innovation, quality, and enduring impact in the scientific content it disseminates. Devoted to showcasing a curated selection of papers from pure, applied, and interdisciplinary physics, PRX aims to feature work with the potential to shape current and future research and to leave a lasting, profound impact in its respective fields. Encompassing the entire spectrum of physics subject areas, PRX places a special focus on groundbreaking interdisciplinary research with broad-reaching influence.