{"title":"Multilevel Context Learning with Large Language Models for Text-Attributed Graphs on Social Networks.","authors":"Xiaokang Cai, Ruoyuan Gong, Hao Jiang","doi":"10.3390/e27030286","DOIUrl":null,"url":null,"abstract":"<p><p>There are complex graph structures and rich textual information on social networks. Text provides important information for various tasks, while graph structures offer multilevel context for the semantics of the text. Contemporary researchers tend to represent these kinds of data by text-attributed graphs (TAGs). Most TAG-based representation learning methods focus on designing frameworks that convey graph structures to large language models (LLMs) to generate semantic embeddings for downstream graph neural networks (GNNs). However, these methods only provide text attributes for nodes, which fails to capture the multilevel context and leads to the loss of valuable information. To tackle this issue, we introduce the Multilevel Context Learner (MCL) model, which leverages multilevel context on social networks to enhance LLMs' semantic embedding capabilities. We model the social network as a multilevel context textual-edge graph (MC-TEG), effectively capturing both graph structure and semantic relationships. Our MCL model leverages the reasoning capabilities of LLMs to generate semantic embeddings by integrating these multilevel contexts. The tailored bidirectional dynamic graph attention layers are introduced to further distinguish the weight information. Experimental evaluations on six real social network datasets show that the MCL model consistently outperforms all baseline models. Specifically, the MCL model achieves prediction accuracies of 77.98%, 77.63%, 74.61%, 76.40%, 72.89%, and 73.40%, with absolute improvements of 9.04%, 9.19%, 11.05%, 7.24%, 6.11%, and 9.87% over the next best models. 
These results demonstrate the effectiveness of the proposed MCL model.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"27 3","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11940941/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Entropy","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.3390/e27030286","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Social networks contain complex graph structures and rich textual information. Text provides important information for various tasks, while graph structures offer multilevel context for the semantics of that text. Contemporary researchers tend to represent such data as text-attributed graphs (TAGs). Most TAG-based representation learning methods focus on designing frameworks that convey graph structures to large language models (LLMs), which generate semantic embeddings for downstream graph neural networks (GNNs). However, these methods provide only text attributes for nodes, which fails to capture the multilevel context and leads to the loss of valuable information. To tackle this issue, we introduce the Multilevel Context Learner (MCL) model, which leverages multilevel context on social networks to enhance LLMs' semantic embedding capabilities. We model the social network as a multilevel context textual-edge graph (MC-TEG), effectively capturing both graph structure and semantic relationships. Our MCL model leverages the reasoning capabilities of LLMs to generate semantic embeddings by integrating these multilevel contexts. Tailored bidirectional dynamic graph attention layers are introduced to further distinguish weight information. Experimental evaluations on six real social network datasets show that the MCL model consistently outperforms all baseline models. Specifically, the MCL model achieves prediction accuracies of 77.98%, 77.63%, 74.61%, 76.40%, 72.89%, and 73.40%, with absolute improvements of 9.04%, 9.19%, 11.05%, 7.24%, 6.11%, and 9.87% over the next-best models. These results demonstrate the effectiveness of the proposed MCL model.
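The abstract does not specify how the bidirectional dynamic graph attention layers are computed. As a loose illustration only, the sketch below shows what one "dynamic" attention step over a node's in- and out-neighbors can look like, using GATv2-style scoring (the nonlinearity is applied before the attention vector, so neighbor rankings depend on the center node). Scalar features, function names, and parameters here are all simplifying assumptions, not details from the paper, whose actual layers operate on LLM-generated embeddings.

```python
import math

def leaky_relu(x, slope=0.2):
    # Standard leaky ReLU used in graph attention scoring.
    return x if x >= 0.0 else slope * x

def attention_weights(h_center, h_neighbors, w=1.0, a=1.0):
    """Softmax-normalized dynamic attention of a center node over its neighbors.

    GATv2-style: the nonlinearity sits inside the score, before `a`,
    so which neighbor scores highest can vary with the center node.
    (Scalar toy version; real layers use weight matrices and vectors.)
    """
    scores = [a * leaky_relu(w * (h_center + h_n)) for h_n in h_neighbors]
    m = max(scores)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(h_center, h_in_neighbors, h_out_neighbors):
    """Bidirectional aggregation: attend separately over in- and out-neighbors,
    then sum the two directed views (a plausible reading of 'bidirectional')."""
    out = 0.0
    for h_set in (h_in_neighbors, h_out_neighbors):
        if h_set:
            alphas = attention_weights(h_center, h_set)
            out += sum(al * h for al, h in zip(alphas, h_set))
    return out
```

In a real implementation each direction would typically get its own learned parameters; they are shared here only to keep the toy example short.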
About the Journal
Entropy (ISSN 1099-4300) is an international and interdisciplinary journal of entropy and information studies that publishes reviews, regular research papers, and short notes. Our aim is to encourage scientists to publish their theoretical and experimental work in as much detail as possible. There is no restriction on the length of papers. Where computations or experiments are involved, full details must be provided so that the results can be reproduced.