{"title":"深度神经网络响应训练的简单理论","authors":"Kenichi Nakazato","doi":"arxiv-2405.04074","DOIUrl":null,"url":null,"abstract":"Deep neural networks give us a powerful method to model the training\ndataset's relationship between input and output. We can regard that as a\ncomplex adaptive system consisting of many artificial neurons that work as an\nadaptive memory as a whole. The network's behavior is training dynamics with a\nfeedback loop from the evaluation of the loss function. We already know the\ntraining response can be constant or shows power law-like aging in some ideal\nsituations. However, we still have gaps between those findings and other\ncomplex phenomena, like network fragility. To fill the gap, we introduce a very\nsimple network and analyze it. We show the training response consists of some\ndifferent factors based on training stages, activation functions, or training\nmethods. In addition, we show feature space reduction as an effect of\nstochastic training dynamics, which can result in network fragility. Finally,\nwe discuss some complex phenomena of deep networks.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"233 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A simple theory for training response of deep neural networks\",\"authors\":\"Kenichi Nakazato\",\"doi\":\"arxiv-2405.04074\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks give us a powerful method to model the training\\ndataset's relationship between input and output. We can regard that as a\\ncomplex adaptive system consisting of many artificial neurons that work as an\\nadaptive memory as a whole. The network's behavior is training dynamics with a\\nfeedback loop from the evaluation of the loss function. We already know the\\ntraining response can be constant or shows power law-like aging in some ideal\\nsituations. However, we still have gaps between those findings and other\\ncomplex phenomena, like network fragility. To fill the gap, we introduce a very\\nsimple network and analyze it. We show the training response consists of some\\ndifferent factors based on training stages, activation functions, or training\\nmethods. In addition, we show feature space reduction as an effect of\\nstochastic training dynamics, which can result in network fragility. 
Finally,\\nwe discuss some complex phenomena of deep networks.\",\"PeriodicalId\":501305,\"journal\":{\"name\":\"arXiv - PHYS - Adaptation and Self-Organizing Systems\",\"volume\":\"233 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Adaptation and Self-Organizing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2405.04074\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Adaptation and Self-Organizing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.04074","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A simple theory for training response of deep neural networks
Deep neural networks give us a powerful method to model the relationship between inputs and outputs in a training dataset. We can regard such a network as a complex adaptive system: many artificial neurons that, as a whole, function as an adaptive memory. The network's behavior is governed by its training dynamics, a feedback loop in which the evaluated loss drives the parameter updates.
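To make this feedback-loop picture concrete, here is a minimal sketch (not the paper's exact setup; the toy data, network width, and learning rate are all assumptions chosen for illustration) in which the evaluated loss of a one-hidden-layer network feeds back into the weights at every gradient step:

```python
# Minimal sketch: training as a feedback loop. The loss is evaluated on the
# current outputs, then fed back into the parameters via gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))          # toy inputs (illustrative only)
y = rng.normal(size=(32, 1))          # toy targets

W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.05

for step in range(200):
    h = np.tanh(X @ W1)               # hidden activations
    err = h @ W2 - y
    loss = 0.5 * np.mean(err ** 2)    # loss evaluation ...
    # ... fed back into the weights (the feedback loop):
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
```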
We already know that the training response can be constant, or can show power-law-like aging, in some idealized situations. However, gaps remain between those findings and other complex phenomena, such as network fragility. To fill this gap, we introduce a very simple network and analyze it. We show that the training response is composed of distinct factors that depend on the training stage, the activation function, and the training method.
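One hedged way to probe this stage and activation dependence is sketched below. The response here is taken as an assumed, illustrative proxy (the loss change caused by a fixed small weight perturbation, measured at several checkpoints), not the paper's own definition:

```python
# Sketch: probe how the "training response" (here: loss change under a small
# fixed weight perturbation, an assumed proxy) varies with training stage
# and activation function.
import numpy as np

def train_and_probe(act, d_act, steps=300, probe_every=100, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(32, 4)); y = rng.normal(size=(32, 1))
    W1 = rng.normal(scale=0.5, size=(4, 8))
    W2 = rng.normal(scale=0.5, size=(8, 1))
    lr, responses = 0.05, []
    for step in range(steps):
        h = act(X @ W1); err = h @ W2 - y
        if step % probe_every == 0:
            dW = eps * rng.normal(size=W1.shape)   # fixed-size probe
            h_p = act(X @ (W1 + dW))
            dL = 0.5 * np.mean((h_p @ W2 - y) ** 2) - 0.5 * np.mean(err ** 2)
            responses.append(abs(dL))              # response at this stage
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * d_act(X @ W1, h)) / len(X)
    return responses

tanh = (np.tanh, lambda z, h: 1 - h ** 2)
relu = (lambda z: np.maximum(z, 0), lambda z, h: (z > 0).astype(float))
print("tanh response by stage:", train_and_probe(*tanh))
print("ReLU response by stage:", train_and_probe(*relu))
```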
In addition, we show feature-space reduction as an effect of stochastic training dynamics, which can result in network fragility. Finally, we discuss some complex phenomena of deep networks.
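As a rough illustration of feature-space reduction under stochastic training, the sketch below tracks the participation ratio of the hidden-activation covariance spectrum (an assumed proxy for effective feature dimension) during mini-batch gradient descent; a shrinking value indicates the network is concentrating on fewer feature directions:

```python
# Sketch: track an effective feature dimension (participation ratio of the
# hidden-activation covariance eigenvalues) while training on noisy
# mini-batches. This proxy and the toy setup are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8)); y = rng.normal(size=(256, 1))
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr, batch = 0.05, 16

def participation_ratio(H):
    lam = np.linalg.eigvalsh(np.cov(H.T))          # covariance spectrum
    return lam.sum() ** 2 / (lam ** 2).sum()       # effective dimension

for step in range(2001):
    idx = rng.choice(len(X), batch, replace=False) # stochastic mini-batch
    h = np.tanh(X[idx] @ W1); err = h @ W2 - y[idx]
    W2 -= lr * h.T @ err / batch
    W1 -= lr * X[idx].T @ ((err @ W2.T) * (1 - h ** 2)) / batch
    if step % 500 == 0:
        print(step, participation_ratio(np.tanh(X @ W1)))
```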