Authors: Xingkai Wen, Zhiji Yang
DOI: 10.1109/AEMCSE55572.2022.00083
Published in: 2022 5th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE), April 2022
Citations: 0
Classification efficiency of LassoNet model in image recognition
LassoNet is a neural network framework proposed by Robert Tibshirani et al. and published in the Journal of Machine Learning Research in 2021. The model extends Lasso regression and its feature sparsity to a feedforward neural network, performing feature selection and parameter learning simultaneously without requiring the optimal number of selected features to be known in advance. To evaluate LassoNet's classification performance, it is compared with four shallow learning methods (logistic regression, Fisher linear discriminant, random forest, and support vector machine) and three deep learning methods (CNN, Inception, and residual modules). On high-dimensional, large-sample datasets from five different fields, the experimental results show that LassoNet classifies effectively: it is significantly better than the shallow learning methods and comparable to the deep learning methods. LassoNet is thus versatile and easy to use, although its running time is long. In follow-up work, the feedforward neural network could be optimized or replaced to further improve classification efficiency.
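As a simplified illustration of the sparsity mechanism described above — not the paper's exact Hier-Prox algorithm — the sketch below shows how a Lasso-style soft-thresholding step on per-feature skip-layer weights, combined with a hierarchy constraint that caps each feature's first-layer weights by its skip-layer weight, causes whole input features to drop out of the network. The function names and the clipping shortcut are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise Lasso proximal operator: shrink each entry toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def simplified_hier_prox(theta, W, lam, M):
    """Simplified stand-in for LassoNet's hierarchical proximal step.

    theta : (p,) skip-layer (linear) weights, one per input feature
    W     : (p, k) first-layer weights of the feedforward network
    lam   : Lasso penalty strength
    M     : hierarchy constant; enforces |W[j, :]| <= M * |theta[j]|
    """
    theta_new = soft_threshold(theta, lam)
    # If a feature's skip weight is shrunk to zero, the bound below is zero,
    # so the feature is removed from the network entirely.
    bound = M * np.abs(theta_new)[:, None]
    W_new = np.clip(W, -bound, bound)
    return theta_new, W_new

theta = np.array([2.0, 0.3])
W = np.array([[1.5, -2.0],
              [0.8, 0.1]])
theta_new, W_new = simplified_hier_prox(theta, W, lam=0.5, M=1.0)
```

Here the second feature's skip weight (0.3) falls below the penalty (0.5), so both its skip weight and its first-layer weights are zeroed, while the first feature survives with its hidden weights clipped to the bound — exactly the kind of joint feature selection and parameter learning the abstract describes.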