{"title":"Graph Diffusion-Based Representation Learning for Sequential Recommendation","authors":"Zhaobo Wang;Yanmin Zhu;Chunyang Wang;Xuhao Zhao;Bo Li;Jiadi Yu;Feilong Tang","doi":"10.1109/TKDE.2024.3477621","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3477621","url":null,"abstract":"Sequential recommendation is a critical part of the flourishing online applications by suggesting appealing items on users’ next interactions, where global dependencies among items have proven to be indispensable for enhancing the quality of item representations toward a better understanding of user dynamic preferences. Existing methods rely on pre-defined graphs with shallow Graph Neural Networks to capture such necessary dependencies due to the constraint of the over-smoothing problem. However, this graph representation learning paradigm makes them difficult to satisfy the original expectation because of noisy graph structures and the limited ability of shallow architectures for modeling high-order relations. In this paper, we propose a novel Graph Diffusion Representation-enhanced Attention Network for sequential recommendation, which explores the construction of deeper networks by utilizing graph diffusion on adaptive graph structures for generating expressive item representations. Specifically, we design an adaptive graph generation strategy via leveraging similarity learning between item embeddings, automatically optimizing the input graph topology under the guidance of downstream recommendation tasks. Afterward, we propose a novel graph diffusion paradigm with robustness to over-smoothing, which enriches the learned item representations with sufficient global dependencies for attention-based sequential modeling. Moreover, extensive experiments demonstrate the effectiveness of our approach over state-of-the-art baselines.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"8395-8407"},"PeriodicalIF":8.9,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Time-Series Pre-Trained Models","authors":"Qianli Ma;Zhen Liu;Zhenjing Zheng;Ziyang Huang;Siying Zhu;Zhongzhong Yu;James T. Kwok","doi":"10.1109/TKDE.2024.3475809","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3475809","url":null,"abstract":"Time-Series Mining (TSM) is an important research area since it shows great potential in practical applications. Deep learning models that rely on massive labeled data have been utilized for TSM successfully. However, constructing a large-scale well-labeled dataset is difficult due to data annotation costs. Recently, pre-trained models have gradually attracted attention in the time series domain due to their remarkable performance in computer vision and natural language processing. In this survey, we provide a comprehensive review of Time-Series Pre-Trained Models (TS-PTMs), aiming to guide the understanding, applying, and studying TS-PTMs. Specifically, we first briefly introduce the typical deep learning models employed in TSM. Then, we give an overview of TS-PTMs according to the pre-training techniques. The main categories we explore include supervised, unsupervised, and self-supervised TS-PTMs. Further, extensive experiments involving 27 methods, 434 datasets, and 679 transfer learning scenarios are conducted to analyze the advantages and disadvantages of transfer learning strategies, Transformer-based models, and representative TS-PTMs. Finally, we point out some potential directions of TS-PTMs for future work.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"7536-7555"},"PeriodicalIF":8.9,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing Submanifold Region for Out-of-Distribution Detection","authors":"Xuhui Li;Zhen Fang;Yonggang Zhang;Ning Ma;Jiajun Bu;Bo Han;Haishuai Wang","doi":"10.1109/TKDE.2024.3468629","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3468629","url":null,"abstract":"Detecting out-of-distribution (OOD) samples poses a significant safety challenge when deploying models in open-world scenarios. Advanced works assume that OOD and in-distributional (ID) samples exhibit a distribution discrepancy, showing an encouraging direction in estimating the uncertainty with embedding features or predicting outputs. Besides incorporating auxiliary outlier as decision boundary, quantifying a “meaningful distance” in embedding space as uncertainty measurement is a promising strategy. However, these distances-based approaches overlook the data structure and heavily rely on the high-dimension features learned by deep neural networks, causing unreliable distances due to the “curse of dimensionality”. In this work, we propose a data structure-aware approach to mitigate the sensitivity of distances to the “curse of dimensionality”, where high-dimensional features are mapped to the manifold of ID samples, leveraging the well-known manifold assumption. Specifically, we present a novel distance termed as \u0000<i>tangent distance</i>\u0000, which tackles the issue of generalizing the meaningfulness of distances on testing samples to detect OOD inputs. Inspired by manifold learning for adversarial examples, where adversarial region probability density is close to the orthogonal direction of the manifold, and both OOD and adversarial samples have common characteristic \u0000<inline-formula><tex-math>$-$</tex-math></inline-formula>\u0000 imperceptible perturbations with shift distribution, we propose that OOD samples are relatively far away from the ID manifold, where \u0000<i>tangent distance</i>\u0000 directly computes the Euclidean distance between samples and the nearest submanifold space \u0000<inline-formula><tex-math>$-$</tex-math></inline-formula>\u0000 instantiated as the linear approximation of local region on the manifold. We provide empirical and theoretical insights to demonstrate the effectiveness of OOD uncertainty measurements on the low-dimensional subspace. Extensive experiments show that the \u0000<i>tangent distance</i>\u0000 performs competitively with other post hoc OOD detection baselines on common and large-scale benchmarks, and the theoretical analysis supports our claim that ID samples are likely to reside in high-density regions, explaining the effectiveness of internal connections among ID data.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"130-147"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142797994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Denoise Biomedical Knowledge Graph for Robust Molecular Interaction Prediction","authors":"Tengfei Ma;Yujie Chen;Wen Tao;Dashun Zheng;Xuan Lin;Patrick Cheong-Iao Pang;Yiping Liu;Yijun Wang;Longyue Wang;Bosheng Song;Xiangxiang Zeng;Philip S. Yu","doi":"10.1109/TKDE.2024.3471508","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3471508","url":null,"abstract":"Molecular interaction prediction plays a crucial role in forecasting unknown interactions between molecules, such as drug-target interaction (DTI) and drug-drug interaction (DDI), which are essential in the field of drug discovery and therapeutics. Although previous prediction methods have yielded promising results by leveraging the rich semantics and topological structure of biomedical knowledge graphs (KGs), they have primarily focused on enhancing predictive performance without addressing the presence of inevitable noise and inconsistent semantics. This limitation has hindered the advancement of KG-based prediction methods. To address this limitation, we propose BioKDN (\u0000<bold>Bio</b>\u0000medical \u0000<bold>K</b>\u0000nowledge Graph \u0000<bold>D</b>\u0000enoising \u0000<bold>N</b>\u0000etwork) for robust molecular interaction prediction. BioKDN refines the reliable structure of local subgraphs by denoising noisy links in a learnable manner, providing a general module for extracting task-relevant interactions. To enhance the reliability of the refined structure, BioKDN maintains consistent and robust semantics by smoothing relations around the target interaction. By maximizing the mutual information between reliable structure and smoothed relations, BioKDN emphasizes informative semantics to enable precise predictions. Experimental results on real-world datasets show that BioKDN surpasses state-of-the-art models in DTI and DDI prediction tasks, confirming the effectiveness and robustness of BioKDN in denoising unreliable interactions within contaminated KGs.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"8682-8694"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ThreatInsight: Innovating Early Threat Detection Through Threat-Intelligence-Driven Analysis and Attribution","authors":"Ziyu Wang;Yinghai Zhou;Hao Liu;Jing Qiu;Binxing Fang;Zhihong Tian","doi":"10.1109/TKDE.2024.3474792","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3474792","url":null,"abstract":"The complexity and ongoing evolution of Advanced Persistent Threats (APTs) compromise the efficacy of conventional cybersecurity measures. Firewalls, intrusion detection systems, and antivirus software, which are dependent on static rules and predefined signatures, are increasingly ineffective against these sophisticated threats. Moreover, the use of system audit logs for threat hunting involves a retrospective review of cybersecurity incidents to reconstruct attack paths for attribution, which affects the timeliness and effectiveness of threat detection and response. Even when the attacker is identified, this method does not prevent cyber attacks. To address these challenges, we introduce ThreatInsight, a novel early-stage threat detection solution that minimizes reliance on system audit logs. ThreatInsight detects potential threats by analyzing IPs captured from HoneyPoints. These IPs are processed through threat data mining and threat feature modeling. By employing fact-based and semantic reasoning techniques based on the APT Threat Intelligence Knowledge Graph (APT-TI-KG), ThreatInsight identifies and attributes attackers. The system generates analysis reports detailing the threat knowledge concerning IPs and attributed attackers, equipping analysts with actionable insights and defense strategies. The system architecture includes modules for HoneyPoint IP extraction, Threat Intelligence (TI) data analysis, attacker attribution, and analysis report generation. ThreatInsight facilitates real-time analysis and the identification of potential threats at early stages, thereby enhancing the early detection capabilities of cybersecurity defense systems and improving overall threat detection and proactive defense effectiveness.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"9388-9402"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"THCN: A Hawkes Process Based Temporal Causal Convolutional Network for Extrapolation Reasoning in Temporal Knowledge Graphs","authors":"Tingxuan Chen;Jun Long;Zidong Wang;Shuai Luo;Jincai Huang;Liu Yang","doi":"10.1109/TKDE.2024.3474051","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3474051","url":null,"abstract":"Temporal Knowledge Graphs (TKGs) serve as indispensable tools for dynamic facts storage and reasoning. However, predicting future facts in TKGs presents a formidable challenge due to the unknowable nature of future facts. Existing temporal reasoning models depend on fact recurrence and periodicity, leading to information degradation over prolonged temporal evolution. In particular, the occurrence of one fact may influence the likelihood of another. To this end, we propose THCN, a novel Temporal Causal Convolutional Network based on Hawkes processes, designed for temporal reasoning under the extrapolation setting. Specifically, THCN harnesses a temporal causal convolutional network with dilated factors to capture historical dependencies among facts spanning diverse time intervals. Then, we construct a conditional intensity function based on Hawkes processes for fitting the likelihood of fact occurrence. Importantly, THCN pioneers a dual-level dynamic modeling mechanism, enabling the simultaneous capture of the collective features of nodes and the individual characteristics of facts. Extensive experiments on six real-world TKG datasets demonstrate our method significantly outperforms the state-of-the-art across all four evaluation metrics, indicating that THCN is more applicable for extrapolation reasoning in TKGs.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"9374-9387"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"General Quasi Overlap Functions and Fuzzy Neighborhood Systems-Based Fuzzy Rough Sets With Their Applications","authors":"Mengyuan Li;Xiaohong Zhang;Jiaoyan Shang;Yingcang Ma","doi":"10.1109/TKDE.2024.3474728","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3474728","url":null,"abstract":"Fuzzy rough sets are important mathematical tool for processing data using existing knowledge. Fuzzy rough sets have been widely studied and used into various fields, such as data reduction and image processing, etc. In extensive literature we have studied, general quasi overlap functions and fuzzy neighborhood systems are broader than other all fuzzy operators and knowledge used in existing fuzzy rough sets, respectively. In this article, a novel fuzzy rough sets model (shortly (\u0000<italic>I</i>\u0000, \u0000<italic>Q</i>\u0000, \u0000<italic>NS</i>\u0000)-fuzzy rough sets) is proposed using fuzzy implications, general quasi overlap functions and fuzzy neighborhood systems, which contains almost all existing fuzzy rough sets. Then, a novel feature selection algorithm (called IQNS-FS algorithm) is proposed and implemented using (\u0000<italic>I</i>\u0000, \u0000<italic>Q</i>\u0000, \u0000<italic>NS</i>\u0000)-fuzzy rough sets, dependency and specificity measure. The results of 12 datasets indicate that IQNS-FS algorithm performs better than others. Finally, we input the results of IQNS-FS algorithm into single hidden layer neural networks and other classification algorithms, the results illustrate that the IQNS-FS algorithm can be better connected with neural networks than other classification algorithms. The high classification accuracy of single hidden layer neural networks (a very simple structure) further shows that the attributes selected by the IQNS-FS algorithm are important which can express the features of the datasets.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"8349-8361"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison Queries Generation Using Mathematical Programming for Exploratory Data Analysis","authors":"Alexandre Chanson;Nicolas Labroche;Patrick Marcel;Vincent T'Kindt","doi":"10.1109/TKDE.2024.3474828","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3474828","url":null,"abstract":"Exploratory Data Analysis (EDA) is the interactive process of gaining insights from a dataset. Comparisons are popular insights that can be specified with comparison queries, i.e., specifications of the comparison of subsets of data. In this work, we consider the problem of automatically computing sequences of comparison queries that are coherent, significant and whose overall cost is bounded. Such an automation is usually done by either generating all insights and solving a multi-criteria optimization problem, or using reinforcement learning. In the first case, a large search space has to be explored using exponential algorithms or dedicated heuristics. In the second case, a dataset-specific, time and energy-consuming training, is necessary. We contribute with a novel approach, consisting of decomposing the optimization problem in two: the original problem, that is solved over a smaller search space, and a new problem of generating comparison queries, aiming at generating only queries improving existing solutions of the first problem. This allows to explore only a portion of the search space, without resorting to reinforcement learning. We show that this approach is effective, in that it finds good solutions to the original multi-criteria optimization problem, and efficient, allowing to generate sequences of comparisons in reasonable time.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"7792-7804"},"PeriodicalIF":8.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient and Effective Augmentation Framework With Latent Mixup and Label-Guided Contrastive Learning for Graph Classification","authors":"Aoting Zeng;Liping Wang;Wenjie Zhang;Xuemin Lin","doi":"10.1109/TKDE.2024.3471659","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3471659","url":null,"abstract":"Graph Neural Networks (GNNs) with data augmentation obtain promising results among existing solutions for graph classification. Mixup-based augmentation methods for graph classification have already achieved state-of-the-art performance. However, existing mixup-based augmentation methods either operate in the input space and thus face the challenge of balancing efficiency and accuracy, or directly conduct mixup in the latent space without similarity guarantee, thus leading to lacking semantic validity and limited performance. To address these limitations, this paper proposes \u0000<inline-formula><tex-math>$mathcal {G}$</tex-math></inline-formula>\u0000-MixCon, a novel framework leveraging the strengths of \u0000<i><u>Mix</u></i>\u0000up-based augmentation and supervised \u0000<i><u>Con</u></i>\u0000trastive learning (SCL). To the best of our knowledge, this is the first attempt to develop an SCL-based approach for learning graph representations. Specifically, the mixup-based strategy within the latent space named \u0000<inline-formula><tex-math>$GDA_{gl}$</tex-math></inline-formula>\u0000 and \u0000<inline-formula><tex-math>$GDA_{nl}$</tex-math></inline-formula>\u0000 are proposed, which efficiently conduct linear interpolation between views of the node or graph level. Furthermore, we design a dual-objective loss function named \u0000<i>SupMixCon</i>\u0000 that can consider both the consistency among graphs and the distances between the original and augmented graph. \u0000<i>SupMixCon</i>\u0000 can guide the training process for SCL in \u0000<inline-formula><tex-math>$mathcal {G}$</tex-math></inline-formula>\u0000-MixCon while achieving a similarity guarantee. Comprehensive experiments are conducted on various real-world datasets, the results show that \u0000<inline-formula><tex-math>$mathcal {G}$</tex-math></inline-formula>\u0000-MixCon demonstrably enhances performance, achieving an average accuracy increment of 6.24%, and significantly increases the robustness of GNNs against noisy labels.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"8066-8078"},"PeriodicalIF":8.9,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large Language Models on Graphs: A Comprehensive Survey","authors":"Bowen Jin;Gang Liu;Chi Han;Meng Jiang;Heng Ji;Jiawei Han","doi":"10.1109/TKDE.2024.3469578","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3469578","url":null,"abstract":"Large language models (LLMs), such as GPT4 and LLaMA, are creating significant advancements in natural language processing, due to their strong text encoding/decoding ability and newly found emergent capability (e.g., reasoning). While LLMs are mainly designed to process pure texts, there are many real-world scenarios where text data is associated with rich structure information in the form of graphs (e.g., academic networks, and e-commerce networks) or scenarios where graph data is paired with rich textual information (e.g., molecules with descriptions). Besides, although LLMs have shown their pure text-based reasoning ability, it is underexplored whether such ability can be generalized to graphs (i.e., graph-based reasoning). In this paper, we provide a systematic review of scenarios and techniques related to large language models on graphs. We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs. We then discuss detailed techniques for utilizing LLMs on graphs, including LLM as Predictor, LLM as Encoder, and LLM as Aligner, and compare the advantages and disadvantages of different schools of models. Furthermore, we discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets. Finally, we conclude with potential future research directions in this fast-growing field.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"8622-8642"},"PeriodicalIF":8.9,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}