A Hidden Feature Selection Method based on l2,0-Norm Regularization for Training Single-hidden-layer Neural Networks

Zhiwei Liu, Yuanlong Yu, Zhenzhen Sun
{"title":"基于l2,0范数正则化的单隐层神经网络隐特征选择方法","authors":"Zhiwei Liu, Yuanlong Yu, Zhenzhen Sun","doi":"10.1109/SSCI44817.2019.9002808","DOIUrl":null,"url":null,"abstract":"Feature selection is an important data preprocessing for machine learning. It can improve the performance of machine learning algorithms by removing redundant and noisy features. Among all the methods, those based on l1-norms or l2,1-norms have received considerable attention due to their good performance. However, these methods cannot produce exact row sparsity to the weight matrix, so the number of selected features cannot be determined automatically without using a threshold. To this end, this paper proposes a feature selection method incorporating the l2,0-norm, which can guarantee exact row sparsity of weight matrix. A method based on iterative hard thresholding (IHT) algorithm is also proposed to solve the l2,0- norm regularized least square problem. For fully using the role of row-sparsity induced by the l2,0-norm, this method acts as network pruning for single-hidden-layer neural networks. This method is conducted on the hidden features and it can achieve node-level pruning rather than the connection-level pruning. The experimental results in several public data sets and three image recognition data sets have shown that this method can not only effectively prune the useless hidden nodes, but also obtain better performance.","PeriodicalId":6729,"journal":{"name":"2019 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"50 1","pages":"1810-1817"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Hidden Feature Selection Method based on l2,0-Norm Regularization for Training Single-hidden-layer Neural Networks\",\"authors\":\"Zhiwei Liu, Yuanlong Yu, Zhenzhen Sun\",\"doi\":\"10.1109/SSCI44817.2019.9002808\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Feature selection is an important data preprocessing for machine learning. It can improve the performance of machine learning algorithms by removing redundant and noisy features. Among all the methods, those based on l1-norms or l2,1-norms have received considerable attention due to their good performance. However, these methods cannot produce exact row sparsity to the weight matrix, so the number of selected features cannot be determined automatically without using a threshold. To this end, this paper proposes a feature selection method incorporating the l2,0-norm, which can guarantee exact row sparsity of weight matrix. A method based on iterative hard thresholding (IHT) algorithm is also proposed to solve the l2,0- norm regularized least square problem. For fully using the role of row-sparsity induced by the l2,0-norm, this method acts as network pruning for single-hidden-layer neural networks. This method is conducted on the hidden features and it can achieve node-level pruning rather than the connection-level pruning. 
The experimental results in several public data sets and three image recognition data sets have shown that this method can not only effectively prune the useless hidden nodes, but also obtain better performance.\",\"PeriodicalId\":6729,\"journal\":{\"name\":\"2019 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"volume\":\"50 1\",\"pages\":\"1810-1817\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSCI44817.2019.9002808\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI44817.2019.9002808","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Feature selection is an important data preprocessing step in machine learning: by removing redundant and noisy features, it can improve the performance of learning algorithms. Among existing methods, those based on the l1-norm or the l2,1-norm have received considerable attention due to their good performance. However, these methods cannot produce exact row sparsity in the weight matrix, so the number of selected features cannot be determined automatically without a threshold. To this end, this paper proposes a feature selection method incorporating the l2,0-norm, which guarantees exact row sparsity of the weight matrix. A method based on the iterative hard thresholding (IHT) algorithm is also proposed to solve the l2,0-norm regularized least squares problem. To fully exploit the row sparsity induced by the l2,0-norm, the method acts as network pruning for single-hidden-layer neural networks: it operates on the hidden features and achieves node-level rather than connection-level pruning. Experimental results on several public data sets and three image recognition data sets show that this method not only effectively prunes useless hidden nodes but also achieves better performance.