MDSR-NMF: Multiple deconstruction single reconstruction deep neural network model for non-negative matrix factorization.

IF 1.1 | CAS Region 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Network-Computation in Neural Systems Pub Date : 2023-02-01 Epub Date: 2023-11-09 DOI:10.1080/0954898X.2023.2257773
Prasun Dutta, Rajat K De
Citations: 0

Abstract

MDSR-NMF: Multiple deconstruction single reconstruction deep neural network model for non-negative matrix factorization.

Dimension reduction is one of the most sought-after strategies to cope with high-dimensional ever-expanding datasets. To address this, a novel deep-learning architecture has been designed with multiple deconstruction and single reconstruction layers for non-negative matrix factorization aimed at low-rank approximation. This design ensures that the reconstructed input matrix has a unique pair of factor matrices. The two-stage approach, namely, pretraining and stacking, aids in the robustness of the architecture. The sigmoid function has been adjusted in such a way that fulfils the non-negativity criteria and also helps to alleviate the data-loss problem. Xavier initialization technique aids in the solution of the exploding or vanishing gradient problem. The objective function involves regularizer that ensures the best possible approximation of the input matrix. The superior performance of MDSR-NMF, over six well-known dimension reduction methods, has been demonstrated extensively using five datasets for classification and clustering. Computational complexity and convergence analysis have also been presented to establish the model.
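The paper's deep architecture is not reproduced here, but the low-rank approximation it targets, factorizing a non-negative matrix V into non-negative factors W and H so that V ≈ WH, can be illustrated with the classical multiplicative-update NMF of Lee and Seung. This is a hedged baseline sketch only; MDSR-NMF itself uses a multi-layer deconstruction/reconstruction network with an adjusted sigmoid, Xavier initialization, and a regularized objective, none of which appear in this minimal example.

```python
import numpy as np

# Baseline sketch: classical NMF via multiplicative updates (Lee & Seung).
# Illustrates the low-rank goal V ≈ W @ H with non-negative factors;
# this is NOT the paper's MDSR-NMF deep architecture.

rng = np.random.default_rng(0)
V = rng.random((20, 15))   # non-negative input matrix (20 samples, 15 features)
k = 4                      # target rank of the approximation

# Non-negative random initialization of the factor matrices
W = rng.random((20, k))
H = rng.random((k, 15))
eps = 1e-9                 # guards against division by zero

for _ in range(500):
    # Multiplicative updates keep W and H non-negative by construction
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Relative Frobenius reconstruction error of the rank-k approximation
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

Because each update multiplies the factors by non-negative ratios, non-negativity never has to be enforced explicitly; the deep-learning formulation in the paper instead achieves it through its adjusted sigmoid activation.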

Source journal
Network-Computation in Neural Systems (Engineering & Technology — Engineering: Electrical & Electronic)
CiteScore: 3.70
Self-citation rate: 1.30%
Articles per year: 22
Review time: >12 weeks
Journal description: Network: Computation in Neural Systems welcomes submissions of research papers that integrate theoretical neuroscience with experimental data, emphasizing the utilization of cutting-edge technologies. We invite authors and researchers to contribute their work in the following areas:

Theoretical Neuroscience: This section encompasses neural network modeling approaches that elucidate brain function.

Neural Networks in Data Analysis and Pattern Recognition: We encourage submissions exploring the use of neural networks for data analysis and pattern recognition, including but not limited to image analysis and speech processing applications.

Neural Networks in Control Systems: This category encompasses the utilization of neural networks in control systems, including robotics, state estimation, fault detection, and diagnosis.

Analysis of Neurophysiological Data: We invite submissions focusing on the analysis of neurophysiology data obtained from experimental studies involving animals.

Analysis of Experimental Data on the Human Brain: This section includes papers analyzing experimental data from studies on the human brain, utilizing imaging techniques such as MRI, fMRI, EEG, and PET.

Neurobiological Foundations of Consciousness: We encourage submissions exploring the neural bases of consciousness in the brain and its simulation in machines.