{"title":"对抗性深度神经网络中单纯形压缩的流行","authors":"Yang Cao, Yanbo Chen, Weiwei Liu","doi":"10.1073/pnas.2421593122","DOIUrl":null,"url":null,"abstract":"Neural collapse (NC) reveals that the last layer of the network can capture data representations, leading to similar outputs for examples within the same class, while outputs for examples from different classes form a simplex equiangular tight frame (ETF) structure. This phenomenon has garnered significant attention due to its implications on the intrinsic properties of neural networks. Interestingly, we observe a simplex compression phenomenon in NC, where the geometric size of the simplex ETF reduces under adversarial training, with the degree of compression increasing as the perturbation radius grows. We provide empirical evidence supporting the existence of simplex compression across a wide range of models and datasets. Furthermore, we establish a rigorous theoretical framework that explains our experimental observations, offering insights into NC under adversarial conditions.","PeriodicalId":20548,"journal":{"name":"Proceedings of the National Academy of Sciences of the United States of America","volume":"48 1","pages":""},"PeriodicalIF":9.4000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prevalence of simplex compression in adversarial deep neural networks\",\"authors\":\"Yang Cao, Yanbo Chen, Weiwei Liu\",\"doi\":\"10.1073/pnas.2421593122\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural collapse (NC) reveals that the last layer of the network can capture data representations, leading to similar outputs for examples within the same class, while outputs for examples from different classes form a simplex equiangular tight frame (ETF) structure. This phenomenon has garnered significant attention due to its implications on the intrinsic properties of neural networks. Interestingly, we observe a simplex compression phenomenon in NC, where the geometric size of the simplex ETF reduces under adversarial training, with the degree of compression increasing as the perturbation radius grows. We provide empirical evidence supporting the existence of simplex compression across a wide range of models and datasets. 
Furthermore, we establish a rigorous theoretical framework that explains our experimental observations, offering insights into NC under adversarial conditions.\",\"PeriodicalId\":20548,\"journal\":{\"name\":\"Proceedings of the National Academy of Sciences of the United States of America\",\"volume\":\"48 1\",\"pages\":\"\"},\"PeriodicalIF\":9.4000,\"publicationDate\":\"2025-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the National Academy of Sciences of the United States of America\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1073/pnas.2421593122\",\"RegionNum\":1,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the National Academy of Sciences of the United States of America","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1073/pnas.2421593122","RegionNum":1,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Prevalence of simplex compression in adversarial deep neural networks
Abstract: Neural collapse (NC) reveals that the last layer of the network can capture data representations, leading to similar outputs for examples within the same class, while outputs for examples from different classes form a simplex equiangular tight frame (ETF) structure. This phenomenon has garnered significant attention due to its implications for the intrinsic properties of neural networks. Interestingly, we observe a simplex compression phenomenon in NC: the geometric size of the simplex ETF shrinks under adversarial training, and the degree of compression increases as the perturbation radius grows. We provide empirical evidence supporting the existence of simplex compression across a wide range of models and datasets. Furthermore, we establish a rigorous theoretical framework that explains our experimental observations, offering insights into NC under adversarial conditions.
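To make the geometry concrete, below is a minimal sketch, not the authors' code, of the standard simplex ETF form used in the neural-collapse literature, M = sqrt(K/(K-1)) (I_K − (1/K)11^T), together with one natural proxy for the "geometric size" the abstract refers to. The helper names `simplex_etf` and `etf_radius` are hypothetical, and the rescaling loop at the end only mimics the compression effect; in practice the radius would be measured on class means extracted from adversarially trained networks at different perturbation radii.

```python
# Minimal sketch (assumed formulation, not the paper's code): build a
# simplex ETF, verify its defining properties, and measure a "geometric
# size" proxy that would shrink under simplex compression.
import numpy as np

def simplex_etf(K: int) -> np.ndarray:
    """Return a K x K matrix whose rows are the vertices of a simplex ETF.

    M = sqrt(K / (K - 1)) * (I_K - (1/K) * 11^T): unit-norm rows with
    pairwise inner product -1/(K-1), the standard neural-collapse form.
    """
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

def etf_radius(class_means: np.ndarray) -> float:
    """Mean distance of class means from their centroid -- one natural
    proxy for the geometric size of the simplex."""
    centered = class_means - class_means.mean(axis=0, keepdims=True)
    return float(np.linalg.norm(centered, axis=1).mean())

K = 10
M = simplex_etf(K)

# Verify the ETF properties: unit-norm rows, equal pairwise cosines.
G = M @ M.T
assert np.allclose(np.diag(G), 1.0)
assert np.allclose(G[~np.eye(K, dtype=bool)], -1.0 / (K - 1))

# Simplex compression would appear as a smaller radius for adversarially
# trained models, shrinking as the perturbation radius grows. Here the
# scales merely stand in for that effect to show what the metric reports.
for scale in (1.0, 0.7, 0.4):
    print(f"scale={scale:.1f}  radius={etf_radius(scale * M):.3f}")
```

The radius proxy is one simple choice; any monotone measure of the class-mean configuration's extent (e.g., average pairwise distance between centered class means) would expose the same shrinking trend.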
Journal introduction:
The Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal of the National Academy of Sciences (NAS), serves as an authoritative source for high-impact, original research across the biological, physical, and social sciences. With a global scope, the journal welcomes submissions from researchers worldwide, making it an inclusive platform for advancing scientific knowledge.