Simultaneous Sparsity and Parameter Tying for Deep Learning Using Ordered Weighted ℓ1 Regularization
Dejiao Zhang, Julian Katz-Samuels, Mário A. T. Figueiredo, L. Balzano
2018 IEEE Statistical Signal Processing Workshop (SSP), 2018-06-10. DOI: 10.1109/SSP.2018.8450819
Citations: 1
Abstract
A deep neural network (DNN) usually contains millions of parameters, making both storage and computation extremely expensive. Although this high capacity allows DNNs to learn sophisticated mappings, it also makes them prone to over-fitting. To tackle this issue, we adopt a recently proposed sparsity-inducing regularizer called OWL (ordered weighted ℓ1), which has proven effective in sparse linear regression with strongly correlated covariates. Unlike conventional sparsity-inducing regularizers, OWL simultaneously eliminates unimportant variables by setting their weights to zero, while also explicitly identifying correlated groups of variables by tying the corresponding weights to a common value. We evaluate the OWL regularizer on several deep learning benchmarks, showing that it can dramatically compress the network with slight or even no loss in generalization accuracy.
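To make the penalty concrete, here is a minimal sketch (not the authors' code) of the OWL norm, Ω_w(x) = Σ_i w_i |x|_[i], where |x|_[1] ≥ |x|_[2] ≥ … are the entries of x sorted by decreasing magnitude and w_1 ≥ w_2 ≥ … ≥ 0. The helper names `owl_penalty` and `oscar_weights`, and the λ values, are illustrative assumptions; OSCAR-style linearly decaying weights are one common choice that yields both sparsity and weight tying.

```python
# Minimal sketch of the OWL (ordered weighted l1) penalty; illustrative only,
# not the implementation from the paper.
import numpy as np

def owl_penalty(x, weights):
    """OWL norm: inner product of decreasing weights with sorted magnitudes of x."""
    mags = np.sort(np.abs(np.ravel(x)))[::-1]   # |x| sorted in decreasing order
    return float(np.dot(weights, mags))

def oscar_weights(dim, lambda1=1e-4, lambda2=1e-5):
    """OSCAR special case (assumed settings): w_i = lambda1 + lambda2 * (dim - i)."""
    return lambda1 + lambda2 * np.arange(dim - 1, -1, -1, dtype=float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer = rng.normal(size=100)                # stand-in for one DNN layer's weights
    w = oscar_weights(layer.size)
    print("OWL penalty:", owl_penalty(layer, w))
```

In training, this penalty would typically be added to the task loss (or applied via its proximal operator), so that small weights are driven exactly to zero while strongly correlated weights are tied to a common magnitude.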