IEEE Transactions on Knowledge and Data Engineering: Latest Publications

Learning to Denoise Biomedical Knowledge Graph for Robust Molecular Interaction Prediction
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-10-04 DOI: 10.1109/TKDE.2024.3471508
Tengfei Ma;Yujie Chen;Wen Tao;Dashun Zheng;Xuan Lin;Patrick Cheong-Iao Pang;Yiping Liu;Yijun Wang;Longyue Wang;Bosheng Song;Xiangxiang Zeng;Philip S. Yu
Abstract: Molecular interaction prediction plays a crucial role in forecasting unknown interactions between molecules, such as drug-target interaction (DTI) and drug-drug interaction (DDI), which are essential in the field of drug discovery and therapeutics. Although previous prediction methods have yielded promising results by leveraging the rich semantics and topological structure of biomedical knowledge graphs (KGs), they have primarily focused on enhancing predictive performance without addressing the presence of inevitable noise and inconsistent semantics. This limitation has hindered the advancement of KG-based prediction methods. To address this limitation, we propose BioKDN (Biomedical Knowledge graph Denoising Network) for robust molecular interaction prediction. BioKDN refines the reliable structure of local subgraphs by denoising noisy links in a learnable manner, providing a general module for extracting task-relevant interactions. To enhance the reliability of the refined structure, BioKDN maintains consistent and robust semantics by smoothing relations around the target interaction. By maximizing the mutual information between the reliable structure and smoothed relations, BioKDN emphasizes informative semantics to enable precise predictions. Experimental results on real-world datasets show that BioKDN surpasses state-of-the-art models in DTI and DDI prediction tasks, confirming the effectiveness and robustness of BioKDN in denoising unreliable interactions within contaminated KGs.
Volume 36, Issue 12, Pages 8682-8694
Citations: 0
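BioKDN's learnable link denoising is described only at a high level above. As a rough illustration of the general idea (scoring each knowledge-graph edge from its endpoint embeddings and pruning low-confidence links), here is a minimal NumPy sketch; the scoring function, weight vector, and threshold are all hypothetical and not taken from the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def denoise_edges(h, edges, w, threshold=0.5):
    """Score each edge from its endpoint embeddings with a (hypothetical)
    learned weight vector w, and drop low-confidence links."""
    kept = []
    for u, v in edges:
        score = sigmoid(w @ np.concatenate([h[u], h[v]]))
        if score >= threshold:
            kept.append((u, v))
    return kept

# Toy embeddings: nodes 0 and 1 agree, node 2 is an outlier.
h = np.array([[1.0, 1.0], [1.0, 1.0], [-1.0, -1.0]])
w = np.ones(4)
kept = denoise_edges(h, [(0, 1), (0, 2)], w, threshold=0.6)
# the (0, 2) link scores sigmoid(0) = 0.5 and is pruned
```

In the actual method the edge scores are trained end-to-end with the prediction task; here they are fixed for illustration only.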
Comparison Queries Generation Using Mathematical Programming for Exploratory Data Analysis
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-10-04 DOI: 10.1109/TKDE.2024.3474828
Alexandre Chanson;Nicolas Labroche;Patrick Marcel;Vincent T'Kindt
Abstract: Exploratory Data Analysis (EDA) is the interactive process of gaining insights from a dataset. Comparisons are popular insights that can be specified with comparison queries, i.e., specifications of the comparison of subsets of data. In this work, we consider the problem of automatically computing sequences of comparison queries that are coherent and significant and whose overall cost is bounded. Such automation is usually achieved either by generating all insights and solving a multi-criteria optimization problem, or by using reinforcement learning. In the first case, a large search space has to be explored using exponential algorithms or dedicated heuristics. In the second case, dataset-specific, time- and energy-consuming training is necessary. We contribute a novel approach that decomposes the optimization problem in two: the original problem, solved over a smaller search space, and a new problem of generating comparison queries, which aims to generate only queries that improve existing solutions of the first problem. This allows exploring only a portion of the search space without resorting to reinforcement learning. We show that this approach is effective, in that it finds good solutions to the original multi-criteria optimization problem, and efficient, generating sequences of comparisons in reasonable time.
Volume 36, Issue 12, Pages 7792-7804
Citations: 0
Efficient and Effective Augmentation Framework With Latent Mixup and Label-Guided Contrastive Learning for Graph Classification
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-30 DOI: 10.1109/TKDE.2024.3471659
Aoting Zeng;Liping Wang;Wenjie Zhang;Xuemin Lin
Abstract: Graph Neural Networks (GNNs) with data augmentation obtain promising results among existing solutions for graph classification, and mixup-based augmentation methods have already achieved state-of-the-art performance. However, existing mixup-based augmentation methods either operate in the input space, and thus face the challenge of balancing efficiency and accuracy, or directly conduct mixup in the latent space without a similarity guarantee, leading to limited semantic validity and performance. To address these limitations, this paper proposes G-MixCon, a novel framework leveraging the strengths of Mixup-based augmentation and supervised Contrastive learning (SCL). To the best of our knowledge, this is the first attempt to develop an SCL-based approach for learning graph representations. Specifically, two mixup-based strategies within the latent space, GDA_gl and GDA_nl, are proposed, which efficiently conduct linear interpolation between views at the node or graph level. Furthermore, we design a dual-objective loss function named SupMixCon that considers both the consistency among graphs and the distances between the original and augmented graphs. SupMixCon can guide the training process for SCL in G-MixCon while achieving a similarity guarantee. Comprehensive experiments on various real-world datasets show that G-MixCon demonstrably enhances performance, achieving an average accuracy increment of 6.24%, and significantly increases the robustness of GNNs against noisy labels.
Volume 36, Issue 12, Pages 8066-8078
Citations: 0
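The latent-space mixup that the strategies above build on is ordinary linear interpolation between two representations with a Beta-distributed coefficient. A minimal sketch of that generic operation follows; it is not the paper's implementation, and the function name and defaults are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_mixup(z_i, z_j, alpha=1.0):
    """Mixup in the latent space: interpolate two representations with
    a coefficient lam ~ Beta(alpha, alpha), as in standard mixup."""
    lam = rng.beta(alpha, alpha)
    return lam * z_i + (1.0 - lam) * z_j, lam

z_a = np.array([1.0, 0.0, 2.0])
z_b = np.array([0.0, 4.0, 2.0])
z_mix, lam = latent_mixup(z_a, z_b)
# z_mix lies on the line segment between z_a and z_b
```

Interpolating latent vectors rather than raw graphs is what sidesteps the efficiency problem of input-space mixup noted in the abstract.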
Large Language Models on Graphs: A Comprehensive Survey
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-27 DOI: 10.1109/TKDE.2024.3469578
Bowen Jin;Gang Liu;Chi Han;Meng Jiang;Heng Ji;Jiawei Han
Abstract: Large language models (LLMs), such as GPT-4 and LLaMA, are creating significant advancements in natural language processing, due to their strong text encoding/decoding ability and newly discovered emergent capabilities (e.g., reasoning). While LLMs are mainly designed to process pure text, there are many real-world scenarios where text data is associated with rich structural information in the form of graphs (e.g., academic networks and e-commerce networks), or where graph data is paired with rich textual information (e.g., molecules with descriptions). Moreover, although LLMs have shown pure text-based reasoning ability, it is underexplored whether such ability can be generalized to graphs (i.e., graph-based reasoning). In this paper, we provide a systematic review of scenarios and techniques related to large language models on graphs. We first summarize potential scenarios for adopting LLMs on graphs into three categories: pure graphs, text-attributed graphs, and text-paired graphs. We then discuss detailed techniques for utilizing LLMs on graphs, including LLM as Predictor, LLM as Encoder, and LLM as Aligner, and compare the advantages and disadvantages of the different schools of models. Furthermore, we discuss real-world applications of such methods and summarize open-source code and benchmark datasets. Finally, we conclude with potential future research directions in this fast-growing field.
Volume 36, Issue 12, Pages 8622-8642
Citations: 0
Effective Generalized Low-Rank Tensor Contextual Bandits
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-27 DOI: 10.1109/TKDE.2024.3469782
Qianxin Yi;Yiyang Yang;Shaojie Tang;Jiapeng Liu;Yao Wang
Abstract: In this paper, we aim to build a novel bandits algorithm capable of fully harnessing the power of multi-dimensional data and the inherent non-linearity of reward functions to provide highly usable and accountable decision-making services. To this end, we introduce a generalized low-rank tensor contextual bandits model in which an action is formed from three feature vectors and is thus represented by a tensor. In this formulation, the reward is determined through a generalized linear function applied to the inner product of the action's feature tensor and a fixed but unknown parameter tensor with low-rank structure. To effectively achieve the trade-off between exploration and exploitation, we introduce an algorithm called "Generalized Low-Rank Tensor Exploration Subspace then Refine" (G-LowTESTR). This algorithm first collects data to explore the intrinsic low-rank tensor subspace information embedded in the scenario, and then converts the original problem into a lower-dimensional generalized linear contextual bandits problem. Rigorous theoretical analysis shows that the regret bound of G-LowTESTR is superior to those in the vectorization and matricization cases. We conduct a series of synthetic and real-data experiments to further highlight the effectiveness of G-LowTESTR, leveraging its ability to capitalize on the low-rank tensor structure for enhanced learning.
Volume 36, Issue 12, Pages 8051-8065
Citations: 0
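The reward model described above, a generalized linear function of the inner product between a rank-1 action tensor and a low-rank parameter tensor, can be written out directly. A small NumPy illustration under assumed shapes and a sigmoid link (the paper's exact link function and dimensions may differ):

```python
import numpy as np

def expected_reward(x1, x2, x3, theta,
                    link=lambda t: 1.0 / (1.0 + np.exp(-t))):
    """Generalized linear reward: link(<X, Theta>), where the action
    tensor X is the outer product of three feature vectors."""
    X = np.einsum('i,j,k->ijk', x1, x2, x3)
    return link(np.sum(X * theta))

# A rank-1 (hence low-rank) parameter tensor, purely illustrative.
u, v, w = np.ones(2), np.ones(3), np.ones(2)
theta = np.einsum('i,j,k->ijk', u, v, w) / 12.0
r = expected_reward(np.ones(2), np.ones(3), np.ones(2), theta)
# here <X, Theta> = 1.0, so r = sigmoid(1.0)
```

The low-rank structure of Theta is what G-LowTESTR exploits: after estimating the tensor subspace it only needs to learn a small number of coefficients rather than every entry of Theta.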
Multi-Level Graph Knowledge Contrastive Learning
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-26 DOI: 10.1109/TKDE.2024.3466530
Haoran Yang;Yuhao Wang;Xiangyu Zhao;Hongxu Chen;Hongzhi Yin;Qing Li;Guandong Xu
Abstract: Graph Contrastive Learning (GCL) stands as a potent framework for unsupervised graph representation learning that has gained traction across numerous graph learning applications. The effectiveness of GCL relies on generating high-quality contrasting samples, enhancing the model's ability to discern graph semantics. However, prevailing GCL methods face two key challenges: 1) introducing noise during graph augmentation, and 2) requiring additional storage for generated samples, both of which degrade model performance. In this paper, we propose novel approaches, GKCL (Graph Knowledge Contrastive Learning) and DGKCL (Distilled Graph Knowledge Contrastive Learning), that leverage multi-level graph knowledge to create noise-free contrasting pairs. This framework not only addresses the noise-related challenges but also circumvents excessive storage demands. Furthermore, our method incorporates a knowledge distillation component to optimize the trained embedding tables, reducing the model's scale while ensuring superior performance, particularly for scenarios with smaller embedding sizes. Comprehensive experimental evaluations on three public benchmark datasets underscore the merits of the proposed method and elucidate its properties, primarily how it performs with different embedding sizes and how the distillation weight affects overall performance.
Volume 36, Issue 12, Pages 8829-8841
Citations: 0
A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-24 DOI: 10.1109/TKDE.2024.3454328
Yi Zhang;Yuying Zhao;Zhaoqing Li;Xueqi Cheng;Yu Wang;Olivera Kotevska;Philip S. Yu;Tyler Derr
Abstract: Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and their improvement in practical applications. However, many of these models prioritize high utility performance, such as accuracy, with a lack of privacy consideration, which is a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing/solving privacy issues in GNNs. We also outline potential directions for future research in order to build better privacy-preserving GNNs.
Volume 36, Issue 12, Pages 7497-7515 (Open Access)
Citations: 0
Diffusion-Based Graph Generative Methods
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-23 DOI: 10.1109/TKDE.2024.3466301
Hongyang Chen;Can Xu;Lingyu Zheng;Qiang Zhang;Xuemin Lin
Abstract: As the most cutting-edge generative methods, diffusion methods have shown great advances in a wide range of generation tasks. Among them, graph generation attracts significant research attention for its broad applications in real life. In this survey, we systematically and comprehensively review diffusion-based graph generative methods. We first review the three mainstream paradigms of diffusion methods: denoising diffusion probabilistic models, score-based generative models, and stochastic differential equations. We then categorize and introduce the latest applications of diffusion models on graphs. Finally, we point out some limitations of current studies and directions for future exploration.
Volume 36, Issue 12, Pages 7954-7972
Citations: 0
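Of the three paradigms this survey covers, the denoising diffusion probabilistic model (DDPM) forward process has a particularly compact closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with eps ~ N(0, I). A minimal sketch of that generic step (standard DDPM, not specific to any graph method surveyed):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar_t):
    """One-shot DDPM forward process: sample x_t ~ q(x_t | x_0).
    alpha_bar_t is the cumulative product of the noise schedule terms,
    so alpha_bar_t = 1 means no corruption and 0 means pure noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps, eps

x0 = np.array([1.0, -2.0])
x_noisy, eps = forward_noise(x0, 0.5)   # halfway corrupted
x_clean, _ = forward_noise(x0, 1.0)     # alpha_bar = 1 keeps x0 intact
```

Graph diffusion methods apply the same corruption-and-denoise recipe to adjacency matrices or node features instead of the dense vector used here.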
ReiPool: Reinforced Pooling Graph Neural Networks for Graph-Level Representation Learning
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-23 DOI: 10.1109/TKDE.2024.3466508
Xuexiong Luo;Sheng Zhang;Jia Wu;Hongyang Chen;Hao Peng;Chuan Zhou;Zhao Li;Shan Xue;Jian Yang
Abstract: Graph pooling, as an essential component of graph neural networks, has received increasing attention recently; it aims to learn graph-level representations for whole graphs and is important in graph classification and graph generation tasks. However, current graph pooling methods mainly coarsen a sequence of small-sized graphs to capture hierarchical structures, potentially deteriorating the global structure of the original graph and degrading the quality of graph representations. Furthermore, these methods artificially select the number of graph pooling layers for different graph datasets rather than considering each graph individually; in reality, the differences in structure and size among graphs call for a specific number of pooling layers per graph. In this work, we propose reinforced pooling graph neural networks via adaptive hybrid graph coarsening networks. Specifically, we design a hybrid graph coarsening strategy that coarsens redundant structures of the original graph while retaining the global structure. In addition, we introduce multi-agent reinforcement learning to adaptively perform the graph coarsening process and extract the most representative coarsened graph for each graph, enhancing the quality of graph-level representations. Finally, we design a graph-level contrast to improve the preservation of global information in graph-level representations. Extensive experiments against rich baselines on six benchmark datasets show the effectiveness of ReiPool.
Volume 36, Issue 12, Pages 9109-9122
Citations: 0
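Graph coarsening as used in pooling methods like the one above merges groups of nodes into super-nodes; the simplest feature-level version averages the features of each merged group. A generic NumPy sketch follows (the cluster assignment here is hand-made, whereas ReiPool learns the coarsening with reinforcement learning):

```python
import numpy as np

def coarsen(node_feats, assign):
    """Merge nodes into super-nodes: nodes sharing a cluster id in
    `assign` are averaged into one super-node feature vector."""
    n_clusters = int(assign.max()) + 1
    out = np.zeros((n_clusters, node_feats.shape[1]))
    np.add.at(out, assign, node_feats)          # sum features per cluster
    counts = np.bincount(assign, minlength=n_clusters)
    return out / counts[:, None]                # sum -> mean

feats = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assign = np.array([0, 0, 1])   # hand-made assignment for illustration
coarse = coarsen(feats, assign)
# super-node 0 averages nodes 0 and 1 -> [2.0, 3.0]
```

Repeating this step yields the sequence of smaller graphs that hierarchical pooling methods operate on; choosing how many times to repeat it per graph is exactly the decision ReiPool delegates to its reinforcement-learning agents.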
Cross-Domain Graph Level Anomaly Detection
IF 8.9 · CAS Zone 2 · Computer Science
IEEE Transactions on Knowledge and Data Engineering Pub Date : 2024-09-20 DOI: 10.1109/TKDE.2024.3462442
Zhong Li;Sheng Liang;Jiayang Shi;Matthijs van Leeuwen
Abstract: Existing graph-level anomaly detection methods are predominantly unsupervised due to the high cost of obtaining labels, yielding sub-optimal detection accuracy compared to supervised methods. Moreover, they rely heavily on the assumption that the training data consists exclusively of normal graphs, so even the presence of a few anomalous graphs can lead to substantial performance degradation. To alleviate these problems, we propose a cross-domain graph-level anomaly detection method, aiming to identify anomalous graphs from a set of unlabeled graphs (the target domain) by using easily accessible normal graphs from a different but related domain (the source domain). Our method consists of four components: a feature extractor that preserves semantic and topological information of individual graphs while incorporating the distance between different graphs; an adversarial domain classifier that makes graph-level representations domain-invariant; a one-class classifier that exploits label information in the source domain; and a class aligner that aligns classes from both domains based on pseudo-labels. Experiments on seven benchmark datasets show that the proposed method largely outperforms state-of-the-art methods.
Volume 36, Issue 12, Pages 7839-7850 (Open Access)
Citations: 0