Pruning convolutional neural networks for inductive conformal prediction

IF 5.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Xindi Zhao, Amin Farjudian, Anthony Bellotti
{"title":"Pruning convolutional neural networks for inductive conformal prediction","authors":"Xindi Zhao ,&nbsp;Amin Farjudian ,&nbsp;Anthony Bellotti","doi":"10.1016/j.neucom.2024.128704","DOIUrl":null,"url":null,"abstract":"<div><div>Neural network pruning is a popular approach to reducing model storage size and inference time by removing redundant parameters in the neural network. However, the uncertainty of predictions from pruned models is unexplored. In this paper we study neural network pruning in the context of conformal predictors (CP). The conformal prediction framework built on top of machine learning algorithms supplements their predictions with reliable uncertainty measure in the form of prediction sets, under the independent and identically distributed assumption on the data. Convolutional neural networks (CNNs) have complicated architectures and are widely used in various applications nowadays. Therefore, we focus on pruning CNNs and, in particular, filter-level pruning. We first propose a brute force method that estimates the contribution of a filter to the CP’s predictive efficiency and removes those with the least contribution. Given the computation inefficiency of the brute force method, we also propose the Taylor expansion to approximate the filter’s contribution. Furthermore, we improve the global pruning method by protecting the most important filters within each layer from being pruned. In addition, we explore the ConfTr loss function which is optimized to yield maximal CP efficiency in the context of neural network pruning. We have conducted extensive experimental studies and compared the results regarding the trade-offs between predictive efficiency, computational efficiency, and network sparsity. These results are instructive for deploying pruned neural networks with applications using conformal prediction where reliable predictions and reduced computational cost are relevant, such as in safety-critical applications.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224014759","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Neural network pruning is a popular approach to reducing model storage size and inference time by removing redundant parameters from the neural network. However, the uncertainty of predictions from pruned models remains unexplored. In this paper we study neural network pruning in the context of conformal predictors (CP). The conformal prediction framework, built on top of machine learning algorithms, supplements their predictions with a reliable uncertainty measure in the form of prediction sets, under the assumption that the data are independent and identically distributed. Convolutional neural networks (CNNs) have complicated architectures and are widely used in many applications, so we focus on pruning CNNs and, in particular, on filter-level pruning. We first propose a brute-force method that estimates the contribution of each filter to the CP's predictive efficiency and removes those with the least contribution. Given the computational inefficiency of the brute-force method, we also propose a Taylor expansion to approximate a filter's contribution. Furthermore, we improve the global pruning method by protecting the most important filters within each layer from being pruned. In addition, we explore the ConfTr loss function, which is optimized to yield maximal CP efficiency, in the context of neural network pruning. We have conducted extensive experimental studies and compared the results regarding the trade-offs between predictive efficiency, computational efficiency, and network sparsity. These results are instructive for deploying pruned neural networks in applications that use conformal prediction and where reliable predictions and reduced computational cost are both relevant, such as safety-critical applications.
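
For readers unfamiliar with the framework, the sketch below illustrates inductive (split) conformal prediction for classification: nonconformity scores are computed on a held-out calibration set, and a test point's prediction set contains every label whose conformal p-value exceeds the significance level. This is a minimal illustration, not the authors' implementation; the 1 - probability nonconformity score and names such as `prediction_set` are assumptions made for the example.

```python
# Minimal sketch of inductive (split) conformal prediction for
# classification. Illustrative only: the 1 - probability nonconformity
# score and the random stand-in "model outputs" are assumptions,
# not the paper's setup.
import numpy as np

def nonconformity(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score each calibration example by 1 - probability of its true label."""
    return 1.0 - probs[np.arange(len(labels)), labels]

def prediction_set(calib_scores: np.ndarray, test_probs: np.ndarray,
                   alpha: float = 0.1) -> list[int]:
    """Return the labels whose conformal p-value exceeds alpha."""
    n = len(calib_scores)
    kept = []
    for y in range(len(test_probs)):
        score_y = 1.0 - test_probs[y]  # candidate label's nonconformity score
        # p-value: fraction of calibration scores at least as nonconforming
        p_value = (np.sum(calib_scores >= score_y) + 1) / (n + 1)
        if p_value > alpha:
            kept.append(y)
    return kept

# Usage: under the i.i.d. assumption, the sets cover the true label
# with probability at least 1 - alpha.
rng = np.random.default_rng(0)
calib_probs = rng.dirichlet(np.ones(10), size=500)  # stand-in softmax outputs
calib_labels = rng.integers(0, 10, size=500)
scores = nonconformity(calib_probs, calib_labels)
print(prediction_set(scores, rng.dirichlet(np.ones(10)), alpha=0.1))
```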
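
The filter-scoring step can be illustrated in the same spirit. The paper approximates each filter's contribution to CP predictive efficiency with a Taylor expansion; the sketch below uses a common first-order variant, scoring a filter by the absolute activation-times-gradient sum over its output map, with a generic loss standing in for the CP-efficiency objective. It is a sketch under those assumptions, not the authors' exact criterion.

```python
# Sketch of first-order Taylor filter importance for a Conv2d layer.
# The loss here is a generic stand-in for the CP-efficiency objective;
# the |activation * gradient| rule is a common first-order criterion,
# assumed for illustration rather than taken from the paper.
import torch
import torch.nn as nn

def taylor_filter_importance(activations: torch.Tensor,
                             loss: torch.Tensor) -> torch.Tensor:
    """One importance score per output filter (channel) of a conv layer."""
    grads = torch.autograd.grad(loss, activations, retain_graph=True)[0]
    # |a * dL/da| summed over batch and spatial dims: (N, C, H, W) -> (C,)
    return (activations * grads).abs().sum(dim=(0, 2, 3))

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)
acts = conv(x)
loss = acts.pow(2).mean()  # stand-in loss for illustration
scores = taylor_filter_importance(acts, loss)
print(scores)  # filters with the smallest scores would be pruned first
```

Under a global pruning schedule these scores would be ranked across all layers; the layer-wise protection the abstract describes would then exempt each layer's top-scoring filters from that global ranking.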
Journal

Neurocomputing (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual article count: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.