{"title":"针对高维稀疏数据的图并入潜在因素分析模型","authors":"Di Wu;Yi He;Xin Luo","doi":"10.1109/TETC.2023.3292866","DOIUrl":null,"url":null,"abstract":"A High-dimensional and \n<underline>s</u>\nparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems or wireless sensor networks. It is of great significance to perform highly accurate representation learning on an HiDS matrix due to the great desires of extracting latent knowledge from it. \n<underline>L</u>\natent \n<underline>f</u>\nactor \n<underline>a</u>\nnalysis (LFA), which represents an HiDS matrix by learning the low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models directly perform such embeddings on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To aid this issue, this paper proposes a \n<underline>g</u>\nraph-incorporated \n<underline>l</u>\natent \n<underline>f</u>\nactor \n<underline>a</u>\nnalysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed for identifying the hidden \n<underline>h</u>\nigh-\n<underline>o</u>\nrder \n<underline>i</u>\nnteraction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed with the incorporation of HOI, thereby improving the representation learning ability of a resultant model. 
Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability to HiDS data.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"907-917"},"PeriodicalIF":5.1000,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Graph-Incorporated Latent Factor Analysis Model for High-Dimensional and Sparse Data\",\"authors\":\"Di Wu;Yi He;Xin Luo\",\"doi\":\"10.1109/TETC.2023.3292866\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A High-dimensional and \\n<underline>s</u>\\nparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems or wireless sensor networks. It is of great significance to perform highly accurate representation learning on an HiDS matrix due to the great desires of extracting latent knowledge from it. \\n<underline>L</u>\\natent \\n<underline>f</u>\\nactor \\n<underline>a</u>\\nnalysis (LFA), which represents an HiDS matrix by learning the low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models directly perform such embeddings on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To aid this issue, this paper proposes a \\n<underline>g</u>\\nraph-incorporated \\n<underline>l</u>\\natent \\n<underline>f</u>\\nactor \\n<underline>a</u>\\nnalysis (GLFA) model. 
It adopts two-fold ideas: 1) a graph is constructed for identifying the hidden \\n<underline>h</u>\\nigh-\\n<underline>o</u>\\nrder \\n<underline>i</u>\\nnteraction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed with the incorporation of HOI, thereby improving the representation learning ability of a resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability to HiDS data.\",\"PeriodicalId\":13156,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computing\",\"volume\":\"11 4\",\"pages\":\"907-917\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2023-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10179251/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10179251/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
A Graph-Incorporated Latent Factor Analysis Model for High-Dimensional and Sparse Data
A high-dimensional and sparse (HiDS) matrix is frequently encountered in Big Data-related applications such as e-commerce systems and wireless sensor networks. Performing highly accurate representation learning on an HiDS matrix is of great significance, owing to the strong desire to extract latent knowledge from it. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings from its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models perform such embeddings directly on an HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed to identify the hidden high-order interaction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability on HiDS data.
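To make the baseline concrete: the LFA idea the abstract builds on is to factor an HiDS matrix into low-rank row and column embeddings trained on the observed entries only. The sketch below is a minimal, generic illustration of that idea (plain SGD with squared loss and L2 regularization); it is not the authors' GLFA model, and all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def lfa_factorize(entries, n_rows, n_cols, rank=2, lr=0.01, reg=0.01,
                  epochs=300, seed=0):
    """Generic LFA sketch: factor a sparse matrix given as (row, col, value)
    triples into low-rank embeddings P (rows) and Q (columns), trained by
    SGD on the observed entries only."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_rows, rank))
    Q = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, v in entries:
            err = v - P[i] @ Q[j]          # residual on one observed entry
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    return P, Q

# Toy usage: a 3x3 rank-1 matrix outer([1,2,3],[1,2,3]) with entry (2,2) unobserved.
obs = [(0, 0, 1.0), (0, 1, 2.0), (0, 2, 3.0),
       (1, 0, 2.0), (1, 1, 4.0), (1, 2, 6.0),
       (2, 0, 3.0), (2, 1, 6.0)]
P, Q = lfa_factorize(obs, 3, 3)
pred = P[2] @ Q[2]   # estimate of the unobserved entry (true rank-1 value is 9)
```

Missing-data prediction, as evaluated in the paper's experiments, then amounts to reading off the inner product of the learned embeddings at an unobserved position.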
Journal Introduction:
IEEE Transactions on Emerging Topics in Computing publishes papers on emerging aspects of computer science, computing technology, and computing applications not currently covered by other IEEE Computer Society Transactions. Some examples of emerging topics in computing include: IT for Green, synthetic and organic computing structures and systems, advanced analytics, social/occupational computing, location-based/client computer systems, morphic computer design, electronic game systems, and health-care IT.