Antti Klemetti; Mikko Raatikainen; Juhani Kivimäki; Lalli Myllyaho; Jukka K. Nurminen
{"title":"从使用表格数据训练的深度神经网络中移除神经元","authors":"Antti Klemetti;Mikko Raatikainen;Juhani Kivimäki;Lalli Myllyaho;Jukka K. Nurminen","doi":"10.1109/OJCS.2024.3467182","DOIUrl":null,"url":null,"abstract":"Deep neural networks bear substantial cloud computational loads and often surpass client devices' capabilities. Research has concentrated on reducing the inference burden of convolutional neural networks processing images. Unstructured pruning, which leads to sparse matrices requiring specialized hardware, has been extensively studied. However, neural networks trained with tabular data and structured pruning, which produces dense matrices handled by standard hardware, are less explored. We compare two approaches: 1) Removing neurons followed by training from scratch, and 2) Structured pruning followed by fine-tuning through additional training over a limited number of epochs. We evaluate these approaches using three models of varying sizes (1.5, 9.2, and 118.7 million parameters) from Kaggle-winning neural networks trained with tabular data. Approach 1 consistently outperformed Approach 2 in predictive performance. The models from Approach 1 had 52%, 8%, and 12% fewer parameters than the original models, with latency reductions of 18%, 5%, and 5%, respectively. Approach 2 required at least one epoch of fine-tuning for recovering predictive performance, with further fine-tuning offering diminishing returns. Approach 1 yields lighter models for retraining in the presence of concept drift and avoids shifting computational load from inference to training, which is inherent in Approach 2. However, Approach 2 can be used to pinpoint the layers that have the least impact on the model's predictive performance when neurons are removed. We found that the feed-forward component of the transformer architecture used in large language models is a promising target for neuron removal.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"542-552"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10693557","citationCount":"0","resultStr":"{\"title\":\"Removing Neurons From Deep Neural Networks Trained With Tabular Data\",\"authors\":\"Antti Klemetti;Mikko Raatikainen;Juhani Kivimäki;Lalli Myllyaho;Jukka K. Nurminen\",\"doi\":\"10.1109/OJCS.2024.3467182\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks bear substantial cloud computational loads and often surpass client devices' capabilities. Research has concentrated on reducing the inference burden of convolutional neural networks processing images. Unstructured pruning, which leads to sparse matrices requiring specialized hardware, has been extensively studied. However, neural networks trained with tabular data and structured pruning, which produces dense matrices handled by standard hardware, are less explored. We compare two approaches: 1) Removing neurons followed by training from scratch, and 2) Structured pruning followed by fine-tuning through additional training over a limited number of epochs. We evaluate these approaches using three models of varying sizes (1.5, 9.2, and 118.7 million parameters) from Kaggle-winning neural networks trained with tabular data. Approach 1 consistently outperformed Approach 2 in predictive performance. 
The models from Approach 1 had 52%, 8%, and 12% fewer parameters than the original models, with latency reductions of 18%, 5%, and 5%, respectively. Approach 2 required at least one epoch of fine-tuning for recovering predictive performance, with further fine-tuning offering diminishing returns. Approach 1 yields lighter models for retraining in the presence of concept drift and avoids shifting computational load from inference to training, which is inherent in Approach 2. However, Approach 2 can be used to pinpoint the layers that have the least impact on the model's predictive performance when neurons are removed. We found that the feed-forward component of the transformer architecture used in large language models is a promising target for neuron removal.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"5 \",\"pages\":\"542-552\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10693557\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10693557/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10693557/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Removing Neurons From Deep Neural Networks Trained With Tabular Data
Deep neural networks impose substantial computational loads in the cloud and often exceed the capabilities of client devices. Research has concentrated on reducing the inference burden of convolutional neural networks that process images. Unstructured pruning, which leads to sparse matrices requiring specialized hardware, has been studied extensively. However, neural networks trained with tabular data, and structured pruning, which produces dense matrices that standard hardware handles, are less explored. We compare two approaches: 1) removing neurons followed by training from scratch, and 2) structured pruning followed by fine-tuning over a limited number of additional epochs. We evaluate these approaches using three models of varying sizes (1.5, 9.2, and 118.7 million parameters) from Kaggle-winning neural networks trained with tabular data. Approach 1 consistently outperformed Approach 2 in predictive performance. The models from Approach 1 had 52%, 8%, and 12% fewer parameters than the original models, with latency reductions of 18%, 5%, and 5%, respectively. Approach 2 required at least one epoch of fine-tuning to recover predictive performance, and further fine-tuning offered diminishing returns. Approach 1 yields lighter models for retraining in the presence of concept drift and avoids shifting computational load from inference to training, which is inherent in Approach 2. However, Approach 2 can be used to pinpoint the layers whose neurons can be removed with the least impact on the model's predictive performance. We found that the feed-forward component of the transformer architecture used in large language models is a promising target for neuron removal.
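To make the idea concrete, the sketch below illustrates one way structured pruning of a feed-forward layer can look in practice: whole neurons are dropped from a hidden layer by slicing the weight matrices, so the pruned layers remain dense and run on standard hardware. This is not the authors' code; the layer sizes, the L1-norm importance score, and the keep_ratio parameter are illustrative assumptions.

```python
# Minimal structured-pruning sketch (assumptions noted above), using PyTorch.
import torch
import torch.nn as nn


def prune_hidden_neurons(fc1: nn.Linear, fc2: nn.Linear, keep_ratio: float = 0.5):
    """Drop the lowest-importance neurons of fc1 and the matching inputs of fc2.

    Importance is scored by the L1 norm of each neuron's incoming weights
    (an assumed criterion, used here only for illustration). Returns two new,
    smaller dense Linear layers.
    """
    importance = fc1.weight.abs().sum(dim=1)        # one score per hidden neuron
    n_keep = max(1, int(keep_ratio * fc1.out_features))
    keep = torch.topk(importance, n_keep).indices.sort().values

    new_fc1 = nn.Linear(fc1.in_features, n_keep)
    new_fc2 = nn.Linear(n_keep, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(fc1.weight[keep, :])   # keep selected output rows
        new_fc1.bias.copy_(fc1.bias[keep])
        new_fc2.weight.copy_(fc2.weight[:, keep])   # keep matching input columns
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2


# Usage: shrink the hidden layer of a small tabular MLP.
fc1, fc2 = nn.Linear(64, 256), nn.Linear(256, 1)
small_fc1, small_fc2 = prune_hidden_neurons(fc1, fc2, keep_ratio=0.5)
x = torch.randn(8, 64)
y = small_fc2(torch.relu(small_fc1(x)))             # dense matmuls, no sparsity
```

Under Approach 1, the pruned sizes would instead define a new, smaller architecture trained from scratch; under Approach 2, the sliced layers above would be fine-tuned for a small number of additional epochs.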