A Graph-Incorporated Latent Factor Analysis Model for High-Dimensional and Sparse Data

IF 5.1 | CAS Zone 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Di Wu;Yi He;Xin Luo
{"title":"针对高维稀疏数据的图并入潜在因素分析模型","authors":"Di Wu;Yi He;Xin Luo","doi":"10.1109/TETC.2023.3292866","DOIUrl":null,"url":null,"abstract":"A High-dimensional and \n<underline>s</u>\nparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems or wireless sensor networks. It is of great significance to perform highly accurate representation learning on an HiDS matrix due to the great desires of extracting latent knowledge from it. \n<underline>L</u>\natent \n<underline>f</u>\nactor \n<underline>a</u>\nnalysis (LFA), which represents an HiDS matrix by learning the low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models directly perform such embeddings on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To aid this issue, this paper proposes a \n<underline>g</u>\nraph-incorporated \n<underline>l</u>\natent \n<underline>f</u>\nactor \n<underline>a</u>\nnalysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed for identifying the hidden \n<underline>h</u>\nigh-\n<underline>o</u>\nrder \n<underline>i</u>\nnteraction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed with the incorporation of HOI, thereby improving the representation learning ability of a resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability to HiDS data.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"907-917"},"PeriodicalIF":5.1000,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Graph-Incorporated Latent Factor Analysis Model for High-Dimensional and Sparse Data\",\"authors\":\"Di Wu;Yi He;Xin Luo\",\"doi\":\"10.1109/TETC.2023.3292866\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A High-dimensional and \\n<underline>s</u>\\nparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems or wireless sensor networks. It is of great significance to perform highly accurate representation learning on an HiDS matrix due to the great desires of extracting latent knowledge from it. \\n<underline>L</u>\\natent \\n<underline>f</u>\\nactor \\n<underline>a</u>\\nnalysis (LFA), which represents an HiDS matrix by learning the low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models directly perform such embeddings on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To aid this issue, this paper proposes a \\n<underline>g</u>\\nraph-incorporated \\n<underline>l</u>\\natent \\n<underline>f</u>\\nactor \\n<underline>a</u>\\nnalysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed for identifying the hidden \\n<underline>h</u>\\nigh-\\n<underline>o</u>\\nrder \\n<underline>i</u>\\nnteraction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed with the incorporation of HOI, thereby improving the representation learning ability of a resultant model. 
Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability to HiDS data.\",\"PeriodicalId\":13156,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computing\",\"volume\":\"11 4\",\"pages\":\"907-917\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2023-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10179251/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10179251/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 2

Abstract

A high-dimensional and sparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems or wireless sensor networks. It is of great significance to perform highly accurate representation learning on an HiDS matrix due to the great desire to extract latent knowledge from it. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models directly perform such embeddings on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed for identifying the hidden high-order interaction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed with the incorporation of HOI, thereby improving the representation learning ability of the resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability on HiDS data.
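For readers unfamiliar with the baseline technique, the sketch below illustrates plain latent factor analysis on a HiDS matrix: two low-rank factor matrices are trained by stochastic gradient descent using only the observed entries, and a missing entry is predicted as the inner product of its row and column factors. This is a minimal illustration of the general LFA idea described in the abstract, not the authors' GLFA model; the toy data, hyperparameters, and L2 regularization are assumptions made for the example.

```python
import numpy as np

def lfa_sgd(observed, shape, rank=8, lr=0.01, reg=0.05, epochs=100, seed=0):
    """Plain latent factor analysis: factorize a sparse matrix from its
    observed entries only, via SGD with L2 regularization.

    observed: list of (row, col, value) triples (the known entries).
    shape:    (n_rows, n_cols) of the full HiDS matrix.
    Returns row factors P (n_rows x rank) and column factors Q (n_cols x rank);
    the estimate of a missing entry (i, j) is P[i] @ Q[j].
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((shape[0], rank))
    Q = 0.1 * rng.standard_normal((shape[1], rank))
    for _ in range(epochs):
        for i, j, r in observed:
            err = r - P[i] @ Q[j]          # residual on a known entry
            p_i = P[i].copy()              # keep the old row factor for Q's update
            P[i] += lr * (err * Q[j] - reg * p_i)   # gradient step with L2 penalty
            Q[j] += lr * (err * p_i - reg * Q[j])
    return P, Q

# Toy usage: a 4x5 matrix with only six observed entries (hypothetical data).
entries = [(0, 0, 5.0), (0, 3, 3.0), (1, 1, 4.0),
           (2, 2, 2.0), (3, 0, 1.0), (3, 4, 4.5)]
P, Q = lfa_sgd(entries, shape=(4, 5))
print("predicted value of missing entry (1, 4):", float(P[1] @ Q[4]))
```

As the abstract describes, GLFA goes beyond this baseline by building a graph over the entities of the HiDS matrix to capture high-order interactions and incorporating that structure into a recurrent LFA scheme; the details are given in the full paper.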
Source Journal
IEEE Transactions on Emerging Topics in Computing
Subject area: Computer Science (miscellaneous)
CiteScore: 12.10
Self-citation rate: 5.10%
Annual publications: 113
Journal description: IEEE Transactions on Emerging Topics in Computing publishes papers on emerging aspects of computer science, computing technology, and computing applications not currently covered by other IEEE Computer Society Transactions. Some examples of emerging topics in computing include: IT for Green, synthetic and organic computing structures and systems, advanced analytics, social/occupational computing, location-based/client computer systems, morphic computer design, electronic game systems, and health-care IT.