{"title":"ProtoMGAE:用于图形表征学习的原型感知掩码图形自动编码器","authors":"Yimei Zheng, Caiyan Jia","doi":"10.1145/3649143","DOIUrl":null,"url":null,"abstract":"<p>Graph self-supervised representation learning has gained considerable attention and demonstrated remarkable efficacy in extracting meaningful representations from graphs, particularly in the absence of labeled data. Two representative methods in this domain are graph auto-encoding and graph contrastive learning. However, the former methods primarily focus on global structures, potentially overlooking some fine-grained information during reconstruction. The latter methods emphasize node similarity across correlated views in the embedding space, potentially neglecting the inherent global graph information in the original input space. Moreover, handling incomplete graphs in real-world scenarios, where original features are unavailable for certain nodes, poses challenges for both types of methods. To alleviate these limitations, we integrate masked graph auto-encoding and prototype-aware graph contrastive learning into a unified model to learn node representations in graphs. In our method, we begin by masking a portion of node features and utilize a specific decoding strategy to reconstruct the masked information. This process facilitates the recovery of graphs from a global or macro level and enables handling incomplete graphs easily. Moreover, we treat the masked graph and the original one as a pair of contrasting views, enforcing the alignment and uniformity between their corresponding node representations at a local or micro level. Lastly, to capture cluster structures from a meso level and learn more discriminative representations, we introduce a prototype-aware clustering consistency loss that is jointly optimized with the above two complementary objectives. Extensive experiments conducted on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially for graph clustering, compared with the state-of-the-art methods, showcasing its superiority in enhancing graph representation learning.</p>","PeriodicalId":49249,"journal":{"name":"ACM Transactions on Knowledge Discovery from Data","volume":"48 1","pages":""},"PeriodicalIF":4.0000,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ProtoMGAE: Prototype-aware Masked Graph Auto-Encoder for Graph Representation Learning\",\"authors\":\"Yimei Zheng, Caiyan Jia\",\"doi\":\"10.1145/3649143\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Graph self-supervised representation learning has gained considerable attention and demonstrated remarkable efficacy in extracting meaningful representations from graphs, particularly in the absence of labeled data. Two representative methods in this domain are graph auto-encoding and graph contrastive learning. However, the former methods primarily focus on global structures, potentially overlooking some fine-grained information during reconstruction. The latter methods emphasize node similarity across correlated views in the embedding space, potentially neglecting the inherent global graph information in the original input space. Moreover, handling incomplete graphs in real-world scenarios, where original features are unavailable for certain nodes, poses challenges for both types of methods. 
To alleviate these limitations, we integrate masked graph auto-encoding and prototype-aware graph contrastive learning into a unified model to learn node representations in graphs. In our method, we begin by masking a portion of node features and utilize a specific decoding strategy to reconstruct the masked information. This process facilitates the recovery of graphs from a global or macro level and enables handling incomplete graphs easily. Moreover, we treat the masked graph and the original one as a pair of contrasting views, enforcing the alignment and uniformity between their corresponding node representations at a local or micro level. Lastly, to capture cluster structures from a meso level and learn more discriminative representations, we introduce a prototype-aware clustering consistency loss that is jointly optimized with the above two complementary objectives. Extensive experiments conducted on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially for graph clustering, compared with the state-of-the-art methods, showcasing its superiority in enhancing graph representation learning.</p>\",\"PeriodicalId\":49249,\"journal\":{\"name\":\"ACM Transactions on Knowledge Discovery from Data\",\"volume\":\"48 1\",\"pages\":\"\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-02-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Knowledge Discovery from Data\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3649143\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Knowledge Discovery from Data","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3649143","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
ProtoMGAE: Prototype-aware Masked Graph Auto-Encoder for Graph Representation Learning
Graph self-supervised representation learning has gained considerable attention and demonstrated remarkable efficacy in extracting meaningful representations from graphs, particularly in the absence of labeled data. Two representative families of methods in this domain are graph auto-encoding and graph contrastive learning. However, the former primarily focuses on global structures and may overlook fine-grained information during reconstruction, while the latter emphasizes node similarity across correlated views in the embedding space and may neglect the global graph information inherent in the original input space. Moreover, real-world graphs are often incomplete, with original features unavailable for some nodes, which poses challenges for both types of methods. To alleviate these limitations, we integrate masked graph auto-encoding and prototype-aware graph contrastive learning into a unified model for learning node representations. Our method first masks a portion of node features and uses a dedicated decoding strategy to reconstruct the masked information; this recovers the graph at a global (macro) level and makes incomplete graphs easy to handle. We then treat the masked graph and the original graph as a pair of contrasting views, enforcing alignment and uniformity between their corresponding node representations at a local (micro) level. Finally, to capture cluster structures at a meso level and learn more discriminative representations, we introduce a prototype-aware clustering consistency loss that is optimized jointly with the two complementary objectives above. Extensive experiments on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially graph clustering, compared with state-of-the-art methods, showcasing its superiority in enhancing graph representation learning.
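The abstract describes the three objectives only at the idea level. As a concrete illustration, the following is a minimal PyTorch sketch of how masked reconstruction (macro), cross-view alignment (micro), and prototype-aware clustering consistency (meso) could be combined in one training step. Everything here is an assumption for illustration: the dense-adjacency GCN encoder, the zero-vector mask token, the cosine-based losses, the KL-based consistency term, and helper names such as mask_node_features and prototype_consistency_loss are hypothetical and not the authors' ProtoMGAE implementation; consult the paper (DOI above) for the exact architecture and losses.

```python
# A minimal, self-contained sketch of the three complementary objectives.
# All hyper-parameters and design choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    """One GCN layer over a dense, row-normalized adjacency (toy encoder)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj_norm):
        return F.relu(self.lin(adj_norm @ x))

def mask_node_features(x, mask_rate=0.5):
    """Replace a random fraction of node feature rows with a mask token
    (a zero vector here; a learnable token is another common choice)."""
    num_mask = int(mask_rate * x.size(0))
    idx = torch.randperm(x.size(0))[:num_mask]
    x_masked = x.clone()
    x_masked[idx] = 0.0
    return x_masked, idx

def reconstruction_loss(x_rec, x, idx):
    """Cosine-style reconstruction error on the masked nodes only (macro)."""
    return (1 - F.cosine_similarity(x_rec[idx], x[idx], dim=-1)).mean()

def alignment_loss(z_masked, z_orig):
    """Node-wise agreement between the masked and original views (micro)."""
    return (1 - F.cosine_similarity(z_masked, z_orig, dim=-1)).mean()

def prototype_consistency_loss(z_masked, z_orig, prototypes, temp=0.1):
    """Soft cluster assignments of both views to shared prototypes should
    agree (meso). KL divergence is one plausible consistency measure."""
    p_m = F.log_softmax(z_masked @ prototypes.t() / temp, dim=-1)
    p_o = F.softmax(z_orig @ prototypes.t() / temp, dim=-1).detach()
    return F.kl_div(p_m, p_o, reduction="batchmean")

# Toy usage: 8 nodes, 16-d features, 4 prototypes, random graph.
n, d, k = 8, 16, 4
x = torch.randn(n, d)
adj = torch.eye(n) + (torch.rand(n, n) > 0.7).float()   # toy adjacency
adj_norm = adj / adj.sum(1, keepdim=True)                # row normalization

encoder = DenseGCNLayer(d, 32)
decoder = nn.Linear(32, d)
prototypes = F.normalize(torch.randn(k, 32), dim=-1)     # learnable in practice

x_masked, idx = mask_node_features(x)
z_masked = encoder(x_masked, adj_norm)
z_orig = encoder(x, adj_norm)

loss = (reconstruction_loss(decoder(z_masked), x, idx)
        + alignment_loss(z_masked, z_orig)
        + prototype_consistency_loss(z_masked, z_orig, prototypes))
loss.backward()
```

Note that computing the reconstruction loss only on masked rows mirrors the masked-auto-encoding idea (the model cannot trivially copy inputs), while detaching the original-view assignments in the consistency term is one simple way to stop the two views from collapsing onto each other; the paper's joint optimization may differ in both respects.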
Journal introduction:
TKDD welcomes papers on a full range of research in the knowledge discovery and analysis of diverse forms of data. Such subjects include, but are not limited to:
- scalable and effective algorithms for data mining and big data analysis
- mining brain networks, data streams, multi-media data, and high-dimensional data
- mining text, Web, and semi-structured data
- mining spatial and temporal data
- data mining for community generation, social network analysis, and graph-structured data
- security and privacy issues in data mining
- visual, interactive, and online data mining
- pre-processing and post-processing for data mining
- robust and scalable statistical methods
- data mining languages
- foundations of data mining
- KDD framework and process
- novel applications and infrastructures exploiting data mining technology, including massively parallel processing and cloud computing platforms
TKDD encourages papers that explore the above subjects in the context of large distributed networks of computers, parallel or multiprocessing computers, or new data devices. TKDD also encourages papers that describe emerging data mining applications that cannot be satisfied by current data mining technology.