{"title":"目标识别网络目标驱动剪枝的全局方法","authors":"Mehmet Z. Akpolat, Abdullah Bülbül","doi":"10.1109/SIU55565.2022.9864720","DOIUrl":null,"url":null,"abstract":"Pruning methods for neural network models are important for devices with performance and storage problems. Recently, unlike traditional pruning methods, The Goal Driven Pruning method has been proposed. This approach, inspired by the attention mechanism in humans, is based on decreasing the sensitivity to the features of distractors in the environment. For this purpose, in this method, pruning is performed not only in the middle layers, but also in the output layers for the task irrelevant classes. In this study, we present Global Goal-driven Pruning, which, unlike Goal-driven Pruning, prunes by evaluating the model as a whole, instead of layer-based pruning. The effectiveness of the proposed model has been demonstrated by the tests.","PeriodicalId":115446,"journal":{"name":"2022 30th Signal Processing and Communications Applications Conference (SIU)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Global Approach for Goal-Driven Pruning of Object Recognition Networks\",\"authors\":\"Mehmet Z. Akpolat, Abdullah Bülbül\",\"doi\":\"10.1109/SIU55565.2022.9864720\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pruning methods for neural network models are important for devices with performance and storage problems. Recently, unlike traditional pruning methods, The Goal Driven Pruning method has been proposed. This approach, inspired by the attention mechanism in humans, is based on decreasing the sensitivity to the features of distractors in the environment. For this purpose, in this method, pruning is performed not only in the middle layers, but also in the output layers for the task irrelevant classes. In this study, we present Global Goal-driven Pruning, which, unlike Goal-driven Pruning, prunes by evaluating the model as a whole, instead of layer-based pruning. The effectiveness of the proposed model has been demonstrated by the tests.\",\"PeriodicalId\":115446,\"journal\":{\"name\":\"2022 30th Signal Processing and Communications Applications Conference (SIU)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 30th Signal Processing and Communications Applications Conference (SIU)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SIU55565.2022.9864720\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 30th Signal Processing and Communications Applications Conference (SIU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIU55565.2022.9864720","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Global Approach for Goal-Driven Pruning of Object Recognition Networks
Pruning methods for neural network models are important for devices with limited compute and storage. Recently, and in contrast to traditional pruning methods, the Goal-Driven Pruning method has been proposed. This approach, inspired by the human attention mechanism, reduces the network's sensitivity to features of distractors in the environment. To this end, pruning is applied not only in the intermediate layers but also in the output layer, where task-irrelevant classes are removed. In this study, we present Global Goal-Driven Pruning, which, unlike Goal-Driven Pruning, prunes by evaluating the model as a whole rather than layer by layer. The effectiveness of the proposed approach is demonstrated through experiments.
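To illustrate the contrast between layer-wise and global pruning, and the idea of pruning task-irrelevant classes in the output layer, the following is a minimal PyTorch sketch. It is not the authors' implementation: the magnitude-based importance scores, the toy architecture, and the `relevant_classes` list are illustrative assumptions, whereas the paper's goal-driven scores are derived from the target recognition task.

```python
# Minimal sketch (not the authors' method): layer-wise vs. global pruning
# thresholds, plus zeroing output-layer rows for task-irrelevant classes.
# Magnitude (|w|) is used as a stand-in importance score for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),          # 10 output classes (hypothetical)
)

def prune_layerwise(model, sparsity):
    """Layer-based pruning: each layer gets its own threshold."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            scores = m.weight.detach().abs()
            thresh = torch.quantile(scores.flatten(), sparsity)
            m.weight.data[scores <= thresh] = 0.0

def prune_global(model, sparsity):
    """Global pruning: a single threshold computed over all layers at once."""
    all_scores = torch.cat([
        m.weight.detach().abs().flatten()
        for m in model.modules() if isinstance(m, nn.Linear)
    ])
    thresh = torch.quantile(all_scores, sparsity)
    for m in model.modules():
        if isinstance(m, nn.Linear):
            m.weight.data[m.weight.detach().abs() <= thresh] = 0.0

def prune_irrelevant_classes(output_layer, relevant_classes):
    """Zero the output-layer rows of classes irrelevant to the current task."""
    keep = torch.zeros(output_layer.out_features, dtype=torch.bool)
    keep[relevant_classes] = True
    output_layer.weight.data[~keep] = 0.0
    output_layer.bias.data[~keep] = 0.0

prune_global(model, sparsity=0.5)
prune_irrelevant_classes(model[-1], relevant_classes=[0, 3, 7])  # hypothetical task
```

The key difference is where the threshold is computed: layer-wise pruning removes a fixed fraction of weights from every layer, while global pruning ranks importance scores across the entire model, so layers that matter less for the goal can end up more heavily pruned.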