Enhancing accuracy of compressed Convolutional Neural Networks through a transfer teacher and reinforcement guided training curriculum

Authors: Anusha Jayasimhan, Pabitha P.
Journal: Knowledge-Based Systems, vol. 306, Article 112719 (Q1, Computer Science, Artificial Intelligence; Impact Factor 7.2)
DOI: 10.1016/j.knosys.2024.112719
Published: 2024-11-15
Available at: https://www.sciencedirect.com/science/article/pii/S0950705124013534
Abstract: Model compression techniques such as network pruning, quantization and knowledge distillation are essential for deploying large Convolutional Neural Networks (CNNs) on resource-constrained devices. However, these techniques frequently cause a loss of accuracy, which degrades performance in applications where precision is crucial. To mitigate this loss, a novel method integrating Curriculum Learning (CL) with model compression is proposed. Curriculum learning is a training strategy in machine learning that exposes a model to progressively more difficult samples. Existing CL approaches rely largely on manually designed difficulty scoring and on hand-crafted pacing from easy to hard examples, which makes them inflexible, demands expert domain knowledge, and can degrade performance. We therefore propose TRACE-CNN, i.e., Transfer-teacher and Reinforcement-guided Adaptive Curriculum for Enhancing Convolutional Neural Networks, a novel curriculum learning approach that addresses these limitations. Our semi-automated CL method uses a pre-trained transfer teacher model whose performance on each training example serves as that example's difficulty measure. Furthermore, we employ a reinforcement learning technique to schedule training according to sample difficulty rather than relying on a fixed scheduler. Experiments on two benchmark datasets demonstrate that, when integrated into a model compression pipeline, our method effectively reduces the accuracy loss usually associated with such compression techniques.
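The abstract names two ingredients: a pre-trained transfer teacher whose per-sample performance ranks examples by difficulty, and a reinforcement learning scheduler that paces the easy-to-hard exposure. The following is a minimal, hypothetical sketch of that idea, inferred from the abstract alone: the teacher's per-sample loss orders a toy curriculum, and an epsilon-greedy bandit (one simple RL choice; the paper's actual scheduler may differ) picks how much of the curriculum the student sees each epoch. All names, the stand-in training step, and numeric choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two TRACE-CNN ingredients described in the
# abstract: (1) transfer-teacher difficulty scoring, (2) an RL pacing
# scheduler (here a simple epsilon-greedy bandit over pacing fractions).
import random

random.seed(0)

def teacher_difficulty(samples, teacher_loss):
    """Rank samples easy -> hard by the teacher's loss on each sample."""
    return sorted(samples, key=teacher_loss)

def epsilon_greedy(q_values, epsilon=0.2):
    """Pick a pacing action: usually the best-known one, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Toy data: "difficulty" is just the magnitude of the sample value,
# standing in for a pre-trained teacher's per-example loss.
samples = [random.uniform(-1, 1) for _ in range(100)]
curriculum = teacher_difficulty(samples, teacher_loss=abs)

# Pacing actions: fraction of the easy-to-hard curriculum shown per epoch.
fractions = [0.25, 0.5, 0.75, 1.0]
q = [0.0] * len(fractions)       # running value estimate per action
counts = [0] * len(fractions)    # times each action was taken

prev_acc = 0.0
for epoch in range(20):
    a = epsilon_greedy(q)
    visible = curriculum[: int(fractions[a] * len(curriculum))]
    # Stand-in for one epoch of student training: accuracy grows with
    # the amount of curriculum seen (a real run would train the CNN here).
    acc = min(1.0, prev_acc + 0.01 * len(visible) / len(curriculum))
    reward = acc - prev_acc                 # improvement drives the scheduler
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]     # incremental mean update
    prev_acc = acc
```

The design point the sketch illustrates: because the scheduler is rewarded by validation improvement rather than following a hand-tuned schedule, the pacing adapts to the student, which is the abstract's argument against fixed, manually designed curricula.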
About the journal:
Knowledge-Based Systems is an international and interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research results in the field. It focuses on systems based on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.