L1/2 Regularization-Based Deep Incremental Non-negative Matrix Factorization for Tumor Recognition

Lulu Yan, Xiaohui Yang
{"title":"L1/2 Regularization-Based Deep Incremental Non-negative Matrix Factorization for Tumor Recognition","authors":"Lulu Yan, Xiaohui Yang","doi":"10.1145/3469678.3469691","DOIUrl":null,"url":null,"abstract":"Non-negative matrix factorization (NMF) is an effective technique for feature representation learning and dimensionality reduction. However, there are two critical challenges for improving the performance of NMF-based methods. One is the sparsity of representation, the other is the sensitivity to the initial value of the iteration, which seriously affects the performance of NMF. To solve the problems, L1/2 regularization is skillfully selected to characterize the sparsity of the data. Furthermore, a layer-wise pre-training strategy in deep learning is used to alleviate the effect of the initial value on NMF, whereby complex network structure is avoided. As such, a L1/2 regularization-based deep NMF (L1/2-DNMF) model is proposed in this study, such that a more stable and sparse deep representation is obtained. Moreover, incremental learning is introduced to reduce the high computational complexity of L1/2-DNMF model, called L1/2-DINMF model, which is suitable for online processing. 
Experiment results on genetic data-based tumor recognition verify that the proposed L1/2-DINMF model outperforms the classic and state-of-the-art methods.","PeriodicalId":22513,"journal":{"name":"The Fifth International Conference on Biological Information and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Fifth International Conference on Biological Information and Biomedical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3469678.3469691","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Non-negative matrix factorization (NMF) is an effective technique for feature representation learning and dimensionality reduction. However, two critical challenges limit the performance of NMF-based methods: the sparsity of the representation, and the sensitivity of the iteration to its initial value, both of which seriously affect performance. To address these problems, L1/2 regularization is selected to characterize the sparsity of the data. Furthermore, a layer-wise pre-training strategy from deep learning is used to alleviate the effect of the initial value on NMF, so that a complex network structure is avoided. On this basis, an L1/2 regularization-based deep NMF (L1/2-DNMF) model is proposed, yielding a more stable and sparse deep representation. Moreover, incremental learning is introduced to reduce the high computational complexity of the L1/2-DNMF model; the resulting L1/2-DINMF model is suitable for online processing. Experimental results on genetic-data-based tumor recognition verify that the proposed L1/2-DINMF model outperforms classic and state-of-the-art methods.
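Two ingredients of the abstract, an L1/2 sparsity penalty on the representation and incremental (online) encoding against a fixed basis, can be illustrated with a minimal sketch. The multiplicative updates below are one common scheme for L1/2-regularized NMF, not the paper's exact algorithm; the function names and the penalty weight `lam` are illustrative assumptions.

```python
import numpy as np

def l12_nmf(X, r, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Factor X ~ W @ H (all non-negative, rank r) while penalizing
    lam * sum(sqrt(H)), an L1/2 sparsity term on the representation H.
    Multiplicative updates; a sketch, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # standard Lee-Seung multiplicative step for the basis W
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # H step: the gradient of lam * sum(sqrt(H)) contributes
        # 0.5 * lam * H**-0.5 to the denominator, shrinking small
        # entries toward zero and enforcing sparsity
        H *= (W.T @ X) / (W.T @ W @ H
                          + 0.5 * lam * np.maximum(H, eps) ** -0.5 + eps)
    return W, H

def incremental_encode(W, x_new, lam=0.1, n_iter=100, eps=1e-9, seed=0):
    """Online step: encode a newly arriving sample x_new against a *fixed*
    learned basis W, instead of refactoring the whole data matrix."""
    rng = np.random.default_rng(seed)
    h = rng.random(W.shape[1]) + eps
    WtW, Wtx = W.T @ W, W.T @ x_new
    for _ in range(n_iter):
        h *= Wtx / (WtW @ h + 0.5 * lam * np.maximum(h, eps) ** -0.5 + eps)
    return h
```

Stacking several such factorizations, with the H of one layer serving as the input matrix of the next, gives the layer-wise "deep" variant the abstract describes.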