Multi-Level Knowledge Distillation with Positional Encoding Enhancement

Impact Factor 7.6 · CAS Zone 1, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Lixiang Xu, Zhiwen Wang, Lu Bai, Shengwei Ji, Bing Ai, Xiaofeng Wang, Philip S. Yu
{"title":"Multi-Level Knowledge Distillation with Positional Encoding Enhancement","authors":"Lixiang Xu ,&nbsp;Zhiwen Wang ,&nbsp;Lu Bai ,&nbsp;Shengwei Ji ,&nbsp;Bing Ai ,&nbsp;Xiaofeng Wang ,&nbsp;Philip S. Yu","doi":"10.1016/j.patcog.2025.111458","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, Graph Neural Networks (GNNs) have achieved substantial success in addressing graph-related tasks. Knowledge Distillation (KD) has increasingly been adopted in graph learning as a classical technique for model compression and acceleration, enabling the transfer of predictive power from trained GNN models to lightweight, easily deployable Multi-Layer Perceptron (MLP) models. However, this approach often neglects node positional features and relies solely on trained GNN-generated labels to train MLPs based on node content features. Moreover, it heavily depends on local information aggregation, making it challenging to capture global graph structure and thereby limiting performance in node classification tasks. To address this issue, we propose <strong>M</strong>ulti-<strong>L</strong>evel <strong>K</strong>nowledge <strong>D</strong>istillation with <strong>P</strong>ositional <strong>E</strong>ncoding Enhancement <strong>(MLKD-PE)</strong>. Our method employs positional encoding technique to generate node positional features, which are then combined with node content features to enhance the MLP’s ability to perceive node positions. Additionally, we introduce a multi-level KD technique that aligns the final output of the student model with the teacher model’s output, facilitating detailed knowledge transfer by incorporating intermediate layer outputs from the teacher model. Experimental results demonstrate that our method significantly improves classification accuracy across multiple datasets compared to the baseline model, confirming its superiority in node classification tasks.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"163 ","pages":"Article 111458"},"PeriodicalIF":7.6000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325001189","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, Graph Neural Networks (GNNs) have achieved substantial success in addressing graph-related tasks. Knowledge Distillation (KD) has increasingly been adopted in graph learning as a classical technique for model compression and acceleration, enabling the transfer of predictive power from trained GNN models to lightweight, easily deployable Multi-Layer Perceptron (MLP) models. However, this approach often neglects node positional features and relies solely on labels generated by the trained GNN to train MLPs on node content features. Moreover, it depends heavily on local information aggregation, making it difficult to capture global graph structure and thereby limiting performance in node classification tasks. To address these issues, we propose Multi-Level Knowledge Distillation with Positional Encoding Enhancement (MLKD-PE). Our method employs a positional encoding technique to generate node positional features, which are then combined with node content features to enhance the MLP’s ability to perceive node positions. Additionally, we introduce a multi-level KD technique that aligns the final output of the student model with the teacher model’s output and incorporates intermediate-layer outputs from the teacher model, enabling finer-grained knowledge transfer. Experimental results demonstrate that our method significantly improves classification accuracy across multiple datasets compared to the baseline model, confirming its superiority in node classification tasks.
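The abstract describes two components: an MLP student enhanced with node positional features concatenated to its content features, and a multi-level distillation objective that matches both the teacher's final output and its intermediate-layer outputs. Below is a minimal PyTorch sketch of those two ideas; the module names, dimensions, choice of positional encoding, intermediate-alignment loss (MSE), and loss weighting (alpha, beta, temperature) are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumptions, not the paper's exact architecture):
# an MLP student consuming content features concatenated with positional
# features, trained with a multi-level KD objective combining
# (i) cross-entropy on ground-truth labels, (ii) soft-label KD against the
# teacher's logits, and (iii) alignment with an intermediate teacher representation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentMLP(nn.Module):
    def __init__(self, content_dim, pos_dim, hidden_dim, num_classes):
        super().__init__()
        self.hidden = nn.Linear(content_dim + pos_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, content_feat, pos_feat):
        # Positional-encoding enhancement: concatenate positional and content features.
        x = torch.cat([content_feat, pos_feat], dim=-1)
        h = F.relu(self.hidden(x))          # intermediate representation
        return h, self.out(h)               # expose both levels for distillation


def multi_level_kd_loss(student_logits, student_hidden,
                        teacher_logits, teacher_hidden,
                        labels, temperature=2.0, alpha=0.5, beta=0.1):
    # Supervised term on the labelled nodes.
    ce = F.cross_entropy(student_logits, labels)
    # Output-level KD: match the teacher's softened class distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Intermediate-level term: align hidden representations (MSE is an assumption;
    # a linear projection is needed if teacher and student dimensions differ).
    mid = F.mse_loss(student_hidden, teacher_hidden)
    return (1 - alpha) * ce + alpha * kd + beta * mid


# Hypothetical usage on a graph with N nodes; pos_feat would in practice come
# from a structural encoding such as Laplacian eigenvectors or random-walk statistics.
N, content_dim, pos_dim, hidden_dim, num_classes = 8, 16, 4, 32, 3
student = StudentMLP(content_dim, pos_dim, hidden_dim, num_classes)
content_feat = torch.randn(N, content_dim)
pos_feat = torch.randn(N, pos_dim)
labels = torch.randint(0, num_classes, (N,))
teacher_logits = torch.randn(N, num_classes)   # from a trained GNN teacher
teacher_hidden = torch.randn(N, hidden_dim)    # teacher intermediate-layer output
h, logits = student(content_feat, pos_feat)
loss = multi_level_kd_loss(logits, h, teacher_logits, teacher_hidden, labels)
```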
Source journal: Pattern Recognition (Engineering & Technology: Electronics and Electrical Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.