A framework for measuring the training efficiency of a neural architecture
Eduardo Cueto-Mendoza, John D. Kelleher
arXiv:2409.07925, arXiv - CS - Machine Learning, 2024-09-12
Measuring efficiency in neural network system development is an open research
problem. This paper presents an experimental framework to measure the training
efficiency of a neural architecture. To demonstrate our approach, we analyze
the training efficiency of Convolutional Neural Networks (CNNs) and their Bayesian
equivalents (BCNNs) on the MNIST and CIFAR-10 tasks. Our results show that training
efficiency decays as training progresses and varies across different stopping
criteria for a given neural model and learning task. We also find a non-linear
relationship between training stopping criteria, model size, and training
efficiency. Furthermore, we illustrate the potential confounding effects of overtraining
on measuring the training efficiency of a neural architecture. Regarding
relative training efficiency across different architectures, our results
indicate that CNNs are more efficient than BCNNs on both datasets. More
generally, as a learning task becomes more complex, the relative difference in
training efficiency between different architectures becomes more pronounced.
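The abstract does not spell out how training efficiency is computed, but the reported decay as training progresses is easy to reproduce under one plausible operationalization: task performance divided by cumulative training cost. The following sketch assumes that definition for illustration only; the paper's actual metric and the numbers below are not taken from the source.

```python
# Hypothetical sketch: training efficiency as accuracy per unit of
# cumulative training cost. This definition is an assumption for
# illustration, not necessarily the metric used in the paper.

def training_efficiency(accuracies, epoch_costs):
    """Return per-epoch efficiency: accuracy / cumulative training cost."""
    cumulative_cost = 0.0
    curve = []
    for acc, cost in zip(accuracies, epoch_costs):
        cumulative_cost += cost
        curve.append(acc / cumulative_cost)
    return curve

# Illustrative numbers: accuracy improves with diminishing returns while
# cost accrues linearly, so efficiency decays as training progresses,
# matching the qualitative trend the abstract reports.
accs = [0.60, 0.80, 0.88, 0.91, 0.92]
costs = [1.0] * len(accs)
print(training_efficiency(accs, costs))
```

Under this definition, the choice of stopping criterion directly determines which point of the decaying curve is reported, which is consistent with the abstract's observation that measured efficiency varies across stopping criteria for the same model and task.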