Inductive Graph Unlearning

Cheng-Long Wang, Mengdi Huai, Di Wang
{"title":"归纳图学习","authors":"Cheng-Long Wang, Mengdi Huai, Di Wang","doi":"10.48550/arXiv.2304.03093","DOIUrl":null,"url":null,"abstract":"As a way to implement the\"right to be forgotten\"in machine learning, \\textit{machine unlearning} aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of other samples. Recently, many frameworks for machine unlearning have been proposed, and most of them focus on image and text data. To extend machine unlearning to graph data, \\textit{GraphEraser} has been proposed. However, a critical issue is that \\textit{GraphEraser} is specifically designed for the transductive graph setting, where the graph is static and attributes and edges of test nodes are visible during training. It is unsuitable for the inductive setting, where the graph could be dynamic and the test graph information is invisible in advance. Such inductive capability is essential for production machine learning systems with evolving graphs like social media and transaction networks. To fill this gap, we propose the \\underline{{\\bf G}}\\underline{{\\bf U}}ided \\underline{{\\bf I}}n\\underline{{\\bf D}}uctiv\\underline{{\\bf E}} Graph Unlearning framework (GUIDE). GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation. Empirically, we evaluate our method on several inductive benchmarks and evolving transaction graphs. Generally speaking, GUIDE can be efficiently implemented on the inductive graph learning tasks for its low graph partition cost, no matter on computation or structure information. The code will be available here: https://github.com/Happy2Git/GUIDE.","PeriodicalId":91597,"journal":{"name":"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium","volume":"20 1","pages":"3205-3222"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Inductive Graph Unlearning\",\"authors\":\"Cheng-Long Wang, Mengdi Huai, Di Wang\",\"doi\":\"10.48550/arXiv.2304.03093\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As a way to implement the\\\"right to be forgotten\\\"in machine learning, \\\\textit{machine unlearning} aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of other samples. Recently, many frameworks for machine unlearning have been proposed, and most of them focus on image and text data. To extend machine unlearning to graph data, \\\\textit{GraphEraser} has been proposed. However, a critical issue is that \\\\textit{GraphEraser} is specifically designed for the transductive graph setting, where the graph is static and attributes and edges of test nodes are visible during training. It is unsuitable for the inductive setting, where the graph could be dynamic and the test graph information is invisible in advance. Such inductive capability is essential for production machine learning systems with evolving graphs like social media and transaction networks. To fill this gap, we propose the \\\\underline{{\\\\bf G}}\\\\underline{{\\\\bf U}}ided \\\\underline{{\\\\bf I}}n\\\\underline{{\\\\bf D}}uctiv\\\\underline{{\\\\bf E}} Graph Unlearning framework (GUIDE). 
GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation. Empirically, we evaluate our method on several inductive benchmarks and evolving transaction graphs. Generally speaking, GUIDE can be efficiently implemented on the inductive graph learning tasks for its low graph partition cost, no matter on computation or structure information. The code will be available here: https://github.com/Happy2Git/GUIDE.\",\"PeriodicalId\":91597,\"journal\":{\"name\":\"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium\",\"volume\":\"20 1\",\"pages\":\"3205-3222\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2304.03093\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2304.03093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

As a way to implement the "right to be forgotten" in machine learning, machine unlearning aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of other samples. Recently, many frameworks for machine unlearning have been proposed, most of them focused on image and text data. To extend machine unlearning to graph data, GraphEraser has been proposed. A critical issue, however, is that GraphEraser is specifically designed for the transductive graph setting, where the graph is static and the attributes and edges of test nodes are visible during training. It is unsuitable for the inductive setting, where the graph can be dynamic and the test graph information is not visible in advance. Such inductive capability is essential for production machine learning systems with evolving graphs, such as social media and transaction networks. To fill this gap, we propose the GUided InDuctivE Graph Unlearning framework (GUIDE). GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation. Empirically, we evaluate our method on several inductive benchmarks and evolving transaction graphs. GUIDE can be implemented efficiently for inductive graph learning tasks because of its low graph-partition cost, in terms of both computation and structural information. The code will be available here: https://github.com/Happy2Git/GUIDE.
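To make the overall workflow concrete, below is a minimal, self-contained Python sketch of the shard-based unlearning pattern that frameworks such as GraphEraser and GUIDE build on: partition the graph into balanced shards, train one model per shard, retrain only the affected shard when a node must be forgotten, and combine per-shard outputs with similarity weights at inference. This is not the authors' implementation; all names (balanced_partition, train_shard_model, ShardedUnlearner) and the toy per-shard "model" are hypothetical stand-ins, and GUIDE's fairness-aware partitioning and subgraph repair are only hinted at in the comments.

```python
# Minimal sketch of shard-based graph unlearning (assumed names, not GUIDE's API).
from collections import defaultdict
import random


def balanced_partition(nodes, num_shards, seed=0):
    """Toy stand-in for guided, balanced graph partitioning.

    GUIDE's partitioner additionally enforces fairness constraints; here we
    simply deal nodes out round-robin so every shard has a similar size.
    """
    rng = random.Random(seed)
    shuffled = list(nodes)
    rng.shuffle(shuffled)
    shards = defaultdict(list)
    for i, node in enumerate(shuffled):
        shards[i % num_shards].append(node)
    return dict(shards)


def train_shard_model(shard_nodes, graph):
    """Placeholder for training a GNN on one shard's subgraph.

    Returns a trivial 'model': the mean feature vector of the shard, which is
    enough to demonstrate the retrain-one-shard unlearning workflow.
    """
    feats = [graph[n]["feat"] for n in shard_nodes]
    dim = len(feats[0])
    return [sum(f[d] for f in feats) / len(feats) for d in range(dim)]


def similarity(vec_a, vec_b):
    """Unnormalized dot-product similarity used for aggregation weights."""
    return sum(a * b for a, b in zip(vec_a, vec_b))


class ShardedUnlearner:
    def __init__(self, graph, num_shards=4):
        self.graph = graph
        self.shards = balanced_partition(graph.keys(), num_shards)
        self.models = {s: train_shard_model(nodes, graph)
                       for s, nodes in self.shards.items()}

    def unlearn(self, node_id):
        """Remove a node and retrain only the shard that contained it."""
        for shard_id, nodes in self.shards.items():
            if node_id in nodes:
                nodes.remove(node_id)
                del self.graph[node_id]
                self.models[shard_id] = train_shard_model(nodes, self.graph)
                return shard_id
        raise KeyError(f"node {node_id} not found in any shard")

    def predict(self, feat):
        """Similarity-weighted aggregation of per-shard outputs."""
        weights = {s: max(similarity(feat, m), 1e-9)
                   for s, m in self.models.items()}
        total = sum(weights.values())
        return [sum(w * self.models[s][d] for s, w in weights.items()) / total
                for d in range(len(feat))]


if __name__ == "__main__":
    # Tiny random graph: node id -> {"feat": [...]} (edges omitted for brevity).
    rng = random.Random(1)
    toy_graph = {i: {"feat": [rng.random() for _ in range(3)]} for i in range(20)}
    unlearner = ShardedUnlearner(toy_graph, num_shards=4)
    print("before unlearning:", unlearner.predict([0.5, 0.5, 0.5]))
    affected = unlearner.unlearn(7)  # delete node 7's contribution
    print("retrained shard:", affected)
    print("after unlearning:", unlearner.predict([0.5, 0.5, 0.5]))
```

The efficiency property the sketch illustrates is that unlearn retrains a single shard rather than the whole model, which is what keeps deletion cheap; GUIDE's contribution, per the abstract, is making this pattern work when the test graph evolves and is unseen at training time.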