Network Fission Ensembles for low-cost self-ensembles

Impact Factor 3.9 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Hojung Lee, Jong-Seok Lee
{"title":"Network Fission Ensembles for low-cost self-ensembles","authors":"Hojung Lee,&nbsp;Jong-Seok Lee","doi":"10.1016/j.patrec.2025.01.032","DOIUrl":null,"url":null,"abstract":"<div><div>Recent ensemble learning methods for image classification have demonstrated improved accuracy with low extra cost. However, they still rely on multiple trained models for ensemble inference, which can become a significant burden as the model size grows. Moreover, their performance has been somewhat limited compared to Deep Ensembles, primarily due to the lower performance of individual ensemble members. In this paper, we propose a low-cost ensemble learning and inference method called Network Fission Ensembles (NFE), which transforms a conventional network into a multi-exit structure allowing predictions to be made at different stages and enabling ensemble learning. To achieve this, we group the weight parameters in the layers into several sets and create multiple auxiliary paths by combining each set to construct multi-exits. We call this process Network Fission. Since this process simply changes the existing network structure to have multiple exits (i.e., classification outputs) without using additional networks, there is no extra computational burden. Furthermore, we employ an ensemble knowledge distillation technique exploiting the losses of all exits to train the network, so that we can achieve high generalization performance despite the reduced network size of each path composed of pruned weights. With our simple yet effective method, we achieve an accuracy of 83.5% on CIFAR100 with Wide-ResNet28-10, surpassing the best existing ensemble method, Deep Ensembles, which achieves 83.0%, while only one-third of the computational complexity is required in our method. The code is available at <span><span>https://github.com/hjdw2/NFE</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"190 ","pages":"Pages 22-28"},"PeriodicalIF":3.9000,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525000327","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Recent ensemble learning methods for image classification have demonstrated improved accuracy with low extra cost. However, they still rely on multiple trained models for ensemble inference, which can become a significant burden as the model size grows. Moreover, their performance has been somewhat limited compared to Deep Ensembles, primarily due to the lower performance of individual ensemble members. In this paper, we propose a low-cost ensemble learning and inference method called Network Fission Ensembles (NFE), which transforms a conventional network into a multi-exit structure that allows predictions at different stages, enabling ensemble learning. To achieve this, we group the weight parameters in the layers into several sets and create multiple auxiliary paths by combining these sets, which constructs the multiple exits. We call this process Network Fission. Since this process simply changes the existing network structure to have multiple exits (i.e., classification outputs) without using additional networks, there is no extra computational burden. Furthermore, we employ an ensemble knowledge distillation technique that exploits the losses of all exits to train the network, so that we can achieve high generalization performance despite the reduced size of each path, which consists of pruned weights. With our simple yet effective method, we achieve an accuracy of 83.5% on CIFAR100 with Wide-ResNet28-10, surpassing the best existing ensemble method, Deep Ensembles, which achieves 83.0%, while requiring only one-third of the computational complexity. The code is available at https://github.com/hjdw2/NFE.
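To make the abstract's description concrete, below is a minimal PyTorch sketch of the two ideas it outlines: (i) "fission" of a single backbone's weights into groups, each group defining a pruned auxiliary path with its own exit, and (ii) an ensemble knowledge distillation loss over all exits. The names (`TinyNFE`, `nfe_loss`), the channel-mask grouping scheme, the exit placement, and the hyperparameters `T` and `alpha` are illustrative assumptions rather than the paper's actual design; the authors' official implementation is at the GitHub link above.

```python
# Sketch of the NFE ideas as described in the abstract; grouping scheme,
# exit placement, and hyperparameters are assumptions, not the paper's
# exact design (see https://github.com/hjdw2/NFE for the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyNFE(nn.Module):
    """Toy backbone whose body conv is shared by all exits; each exit
    sees a different (disjoint) channel group, i.e. a pruned sub-path."""

    def __init__(self, num_classes=100, width=64, num_exits=2):
        super().__init__()
        self.num_exits = num_exits
        self.stem = nn.Conv2d(3, width, 3, padding=1)
        self.body = nn.Conv2d(width, width, 3, padding=1)
        # Fixed binary masks that partition the body's output channels
        # into one weight group per exit (the "Network Fission").
        masks = torch.zeros(num_exits, width)
        for g, idx in enumerate(torch.arange(width).chunk(num_exits)):
            masks[g, idx] = 1.0
        self.register_buffer("masks", masks)
        self.heads = nn.ModuleList(
            nn.Linear(width, num_classes) for _ in range(num_exits))

    def forward(self, x):
        h = F.relu(self.stem(x))
        logits = []
        for g in range(self.num_exits):
            hg = F.relu(self.body(h)) * self.masks[g].view(1, -1, 1, 1)
            hg = F.adaptive_avg_pool2d(hg, 1).flatten(1)
            logits.append(self.heads[g](hg))
        return logits  # one prediction per exit


def nfe_loss(exit_logits, targets, T=3.0, alpha=0.5):
    """Cross-entropy at every exit plus a KL term pulling each exit
    toward the averaged (ensemble) prediction, temperature-scaled as in
    standard knowledge distillation. T and alpha are assumed values."""
    ensemble = torch.stack(exit_logits).mean(dim=0).detach()
    loss = torch.zeros((), device=targets.device)
    for z in exit_logits:
        ce = F.cross_entropy(z, targets)
        kd = F.kl_div(F.log_softmax(z / T, dim=1),
                      F.softmax(ensemble / T, dim=1),
                      reduction="batchmean") * (T * T)
        loss = loss + (1.0 - alpha) * ce + alpha * kd
    return loss


# All exits are trained jointly; at inference their outputs would be
# averaged to form the self-ensemble prediction.
model = TinyNFE()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
loss = nfe_loss(model(x), y)
loss.backward()
```

Because every exit reuses the same shared backbone weights (only masked differently), the forward cost of the ensemble stays close to that of a single network, which is the source of the cost savings the abstract claims relative to Deep Ensembles.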
Source journal

Pattern Recognition Letters (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles published per year: 287
Review time: 9.1 months

Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.