{"title":"FedGKD:用于保护隐私的谣言检测的联合图谱知识蒸馏","authors":"Peng Zheng, Yong Dou, Yeqing Yan","doi":"10.1016/j.knosys.2024.112476","DOIUrl":null,"url":null,"abstract":"<div><p>The massive spread of rumors on social networks has caused serious adverse effects on individuals and society, increasing the urgency of rumor detection. Existing detection methods based on deep learning have achieved fruitful results by virtue of their powerful semantic representation capabilities. However, the centralized training mode and the reliance on extensive training data containing user privacy pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy provides a potential solution, it results in a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to effectively identify rumors while preserving user privacy. In this framework, we implement anonymization from both the feature and structure dimensions of graphs, and apply differential privacy only to sensitive features to prevent significant deviation in data statistics. Additionally, to improve model generalization performance in federated settings, we learn a lightweight generator at the server to extract global knowledge through knowledge distillation. This knowledge is then broadcast to clients as inductive experience to regulate their local training. Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and displays outstanding privacy-preserving capabilities.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedGKD: Federated Graph Knowledge Distillation for privacy-preserving rumor detection\",\"authors\":\"Peng Zheng, Yong Dou, Yeqing Yan\",\"doi\":\"10.1016/j.knosys.2024.112476\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The massive spread of rumors on social networks has caused serious adverse effects on individuals and society, increasing the urgency of rumor detection. Existing detection methods based on deep learning have achieved fruitful results by virtue of their powerful semantic representation capabilities. However, the centralized training mode and the reliance on extensive training data containing user privacy pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy provides a potential solution, it results in a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to effectively identify rumors while preserving user privacy. In this framework, we implement anonymization from both the feature and structure dimensions of graphs, and apply differential privacy only to sensitive features to prevent significant deviation in data statistics. Additionally, to improve model generalization performance in federated settings, we learn a lightweight generator at the server to extract global knowledge through knowledge distillation. This knowledge is then broadcast to clients as inductive experience to regulate their local training. 
Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and displays outstanding privacy-preserving capabilities.</p></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705124011109\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705124011109","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FedGKD: Federated Graph Knowledge Distillation for privacy-preserving rumor detection
The massive spread of rumors on social networks has caused serious adverse effects on individuals and society, increasing the urgency of rumor detection. Existing detection methods based on deep learning have achieved fruitful results by virtue of their powerful semantic representation capabilities. However, the centralized training mode and the reliance on extensive training data containing private user information pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy offers a potential solution, it causes a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to effectively identify rumors while preserving user privacy. In this framework, we implement anonymization along both the feature and structure dimensions of graphs, and apply differential privacy only to sensitive features to prevent significant deviation in the data statistics. Additionally, to improve model generalization in federated settings, we learn a lightweight generator at the server that extracts global knowledge through knowledge distillation. This knowledge is then broadcast to clients as inductive experience to regulate their local training. Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and exhibits outstanding privacy-preserving capabilities.
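The abstract describes two concrete mechanisms: differential-privacy noise applied only to the sensitive columns of node features, and a lightweight server-side generator distilled against the client models, whose output is broadcast back as a regularizer for local training. Below is a minimal, hypothetical PyTorch sketch of those two ideas; it is not the authors' implementation, and every name, shape, and hyperparameter (perturb_sensitive_features, Generator, the Laplace mechanism standing in for the paper's unspecified DP mechanism, and so on) is an illustrative assumption.

```python
# Illustrative sketch only -- not the FedGKD authors' code. Assumes a Laplace
# DP mechanism and simple linear client heads; the paper's choices may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def perturb_sensitive_features(x, sensitive_cols, epsilon=1.0, sensitivity=1.0):
    """Add Laplace-mechanism noise only to the sensitive columns of the node
    feature matrix x, leaving the remaining feature statistics untouched."""
    noisy = x.clone()
    scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
    noise = torch.distributions.Laplace(0.0, scale).sample(
        (x.size(0), len(sensitive_cols)))
    noisy[:, sensitive_cols] += noise
    return noisy

class Generator(nn.Module):
    """Lightweight server-side generator: (label, noise) -> pseudo embedding."""
    def __init__(self, num_classes, noise_dim=32, emb_dim=64):
        super().__init__()
        self.num_classes, self.noise_dim = num_classes, noise_dim
        self.net = nn.Sequential(
            nn.Linear(num_classes + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim))

    def forward(self, labels):
        z = torch.randn(labels.size(0), self.noise_dim)
        y = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([y, z], dim=1))

def distill_generator(generator, client_heads, num_classes, steps=200, lr=1e-3):
    """Server-side distillation: train the generator so the averaged client
    classifiers (the ensemble teacher) recover the conditioning labels."""
    for head in client_heads:          # teachers stay fixed during distillation
        head.requires_grad_(False)
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        labels = torch.randint(0, num_classes, (64,))
        emb = generator(labels)
        logits = torch.stack([h(emb) for h in client_heads]).mean(0)
        loss = F.cross_entropy(logits, labels)   # global-knowledge objective
        opt.zero_grad(); loss.backward(); opt.step()
    return generator

def client_kd_regularizer(local_head, generator, num_classes, batch=32):
    """Client-side term: the local classifier should agree with the labels the
    broadcast generator's samples were conditioned on (inductive experience)."""
    labels = torch.randint(0, num_classes, (batch,))
    with torch.no_grad():
        emb = generator(labels)        # frozen global generator
    return F.cross_entropy(local_head(emb), labels)

if __name__ == "__main__":
    heads = [nn.Linear(64, 4) for _ in range(3)]      # stand-ins for 3 clients
    gen = distill_generator(Generator(num_classes=4), heads, num_classes=4)
    x_private = perturb_sensitive_features(
        torch.rand(10, 8), sensitive_cols=[0, 2], epsilon=0.5)
```

In this reading, clients never share raw propagation graphs: noising only the sensitive feature columns keeps the overall data statistics close to the originals, and the distilled generator carries global knowledge in a compact, data-free form that each client adds as an extra loss term during local training.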
Journal introduction:
Knowledge-Based Systems is an international, interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research, focusing on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to balance coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.