Auto-Encoding Neural Tucker Factorization

IF 10.4 · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · CAS Region 2, Computer Science
Peng Tang;Xin Luo;Jim Woodcock
DOI: 10.1109/TKDE.2025.3590198
Journal: IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 5795-5807
Published: 2025-07-17
URL: https://ieeexplore.ieee.org/document/11082558/
Citations: 0

Abstract

Low-rank latent factorization of tensors is a powerful method for analyzing high-dimensional and incomplete (HDI) data derived from cyber-physical systems, particularly when computational resources are limited. However, traditional tensor factorization models are inherently linear and struggle to capture the complex nonlinear spatiotemporal dependencies embedded in the data. This paper introduces a novel latent factorization model, namely Auto-encoding Neural Tucker Factorization (ANTucF), for accurate spatiotemporal representation learning on the HDI tensor. It constructs a low-rank Tucker factorization-based neural network to capture a potential latent manifold in space and time, built upon three core ideas: a) applying density-oriented modeling principles with neural networks to facilitate latent feature learning via positional and temporal encoding of mode indices; b) constructing a Tucker interaction tensor to represent all possible spatiotemporal interactions among distinct spatial and temporal modes; and c) enhancing the uniqueness of the core tensor in Tucker factorization by incorporating nonlinear spatiotemporal representation learning via auto-encoding latent interaction learning. The ANTucF model outperforms several state-of-the-art latent factorization of tensors (LFT) models in estimating missing observations on real-world datasets. Additionally, visualizations demonstrate its ability to capture finer spatiotemporal dynamics by nonlinearly exploiting an optimal Tucker core tensor using a data-driven approach.
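To make the abstract's setup concrete, the sketch below shows the basic Tucker product that such models build on: a core (interaction) tensor contracted with one factor matrix per mode, with a masked loss evaluated only on observed entries of the HDI tensor. This is a minimal NumPy illustration of plain Tucker factorization, not the authors' ANTucF model; all dimensions, ranks, and data here are hypothetical.

```python
import numpy as np

# Hypothetical sizes: I x J x K tensor (e.g., space x time x feature),
# with Tucker ranks r1, r2, r3 per mode.
I, J, K = 6, 5, 4
r1, r2, r3 = 3, 3, 2
rng = np.random.default_rng(0)

# One factor matrix per mode, plus the core tensor G, which plays the role
# of the "Tucker interaction tensor": G[a, b, c] weights every combination
# of latent dimensions across the three modes.
A = rng.standard_normal((I, r1))
B = rng.standard_normal((J, r2))
C = rng.standard_normal((K, r3))
G = rng.standard_normal((r1, r2, r3))

# Tucker product: X_hat[i,j,k] = sum_{a,b,c} G[a,b,c] * A[i,a] * B[j,b] * C[k,c]
X_hat = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

# HDI setting: most entries are missing, so the loss is computed only on
# the observed entries, selected here by a random mask (~20% observed).
X = rng.standard_normal((I, J, K))     # toy "ground-truth" tensor
mask = rng.random((I, J, K)) < 0.2
loss = np.sum(mask * (X - X_hat) ** 2) / mask.sum()
print(X_hat.shape, float(loss))
```

ANTucF replaces the linear factor matrices above with neural encodings of the mode indices and learns the core tensor through an auto-encoding step, which is what lets it capture the nonlinear spatiotemporal dependencies the abstract describes.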
Source journal

IEEE Transactions on Knowledge and Data Engineering (Engineering: Electrical & Electronic)
CiteScore: 11.70
Self-citation rate: 3.40%
Articles per year: 515
Review time: 6 months
About the journal: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.