{"title":"利用知识图谱社区对大型语言模型进行微调","authors":"Alessia Amelio , Christopher Buratti , Michele Marchetti , Davide Traini , Domenico Ursino , Luca Virgili","doi":"10.1016/j.eswa.2025.129816","DOIUrl":null,"url":null,"abstract":"<div><div>Since the introduction of GPT-2, Large Language Models (LLMs) have proven to be able to handle various tasks with impressive performance. However, they sometimes generate incorrect output or even hallucinations. To overcome this problem, many researchers have investigated the possibility of integrating external factual knowledge, such as that encoded in Knowledge Graphs (KGs), into LLMs. Although there are many approaches in the existing literature that integrate KGs and LLMs in different ways, few of them use KGs to fine-tune LLMs, and none of them systematically use KG substructures. In this paper, we propose CoFine (Community-Based Fine-Tuner), an approach to fine-tune an LLM using the communities of a KG. CoFine works as follows: it first divides the KG into communities, each of which contains a homogeneous portion of the knowledge expressed by the KG. It then uses these communities to fine-tune the LLM. This way of proceeding allows LLM fine-tuning to focus on specific homogeneous information contained in the KG expressed by each community. CoFine allows the LLM to achieve a very high accuracy in knowledge completion tasks. 
This is evidenced by comparisons between CoFine and a baseline LLM fine-tuning approach, which showed that our approach achieves better results for all metrics considered with several KG.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"298 ","pages":"Article 129816"},"PeriodicalIF":7.5000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploiting knowledge graph communities to fine-tune large language models\",\"authors\":\"Alessia Amelio , Christopher Buratti , Michele Marchetti , Davide Traini , Domenico Ursino , Luca Virgili\",\"doi\":\"10.1016/j.eswa.2025.129816\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Since the introduction of GPT-2, Large Language Models (LLMs) have proven to be able to handle various tasks with impressive performance. However, they sometimes generate incorrect output or even hallucinations. To overcome this problem, many researchers have investigated the possibility of integrating external factual knowledge, such as that encoded in Knowledge Graphs (KGs), into LLMs. Although there are many approaches in the existing literature that integrate KGs and LLMs in different ways, few of them use KGs to fine-tune LLMs, and none of them systematically use KG substructures. In this paper, we propose CoFine (Community-Based Fine-Tuner), an approach to fine-tune an LLM using the communities of a KG. CoFine works as follows: it first divides the KG into communities, each of which contains a homogeneous portion of the knowledge expressed by the KG. It then uses these communities to fine-tune the LLM. This way of proceeding allows LLM fine-tuning to focus on specific homogeneous information contained in the KG expressed by each community. CoFine allows the LLM to achieve a very high accuracy in knowledge completion tasks. 
This is evidenced by comparisons between CoFine and a baseline LLM fine-tuning approach, which showed that our approach achieves better results for all metrics considered with several KG.</div></div>\",\"PeriodicalId\":50461,\"journal\":{\"name\":\"Expert Systems with Applications\",\"volume\":\"298 \",\"pages\":\"Article 129816\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems with Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0957417425034311\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425034311","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Exploiting knowledge graph communities to fine-tune large language models
Since the introduction of GPT-2, Large Language Models (LLMs) have proven capable of handling a wide variety of tasks with impressive performance. However, they sometimes generate incorrect output or even hallucinations. To overcome this problem, many researchers have investigated the possibility of integrating external factual knowledge, such as that encoded in Knowledge Graphs (KGs), into LLMs. Although many approaches in the existing literature integrate KGs and LLMs in different ways, few of them use KGs to fine-tune LLMs, and none of them systematically use KG substructures. In this paper, we propose CoFine (Community-Based Fine-Tuner), an approach to fine-tune an LLM using the communities of a KG. CoFine works as follows: it first divides the KG into communities, each of which contains a homogeneous portion of the knowledge expressed by the KG. It then uses these communities to fine-tune the LLM. This allows fine-tuning to focus on the specific, homogeneous information expressed by each community. CoFine enables the LLM to achieve very high accuracy in knowledge completion tasks. This is evidenced by comparisons between CoFine and a baseline LLM fine-tuning approach, which showed that our approach achieves better results across all metrics considered on several KGs.
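The pipeline the abstract describes — partition the KG into communities, then turn each community's triples into fine-tuning examples for knowledge completion — can be sketched as follows. This is only an illustration, not the paper's implementation: the abstract does not specify which community detection algorithm or prompt format CoFine uses, so connected components and a simple cloze template stand in here, and the toy triples are invented for the example.

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples. Purely illustrative data;
# CoFine's actual KGs, community detection method, and prompt format
# are not given in the abstract.
triples = [
    ("Rome", "capital_of", "Italy"),
    ("Italy", "member_of", "EU"),
    ("Mount_Fuji", "located_in", "Japan"),
    ("Tokyo", "capital_of", "Japan"),
]

def communities(triples):
    """Partition KG entities into communities.

    Stand-in: connected components of the undirected entity graph.
    A real system would use a proper community detection algorithm
    (e.g. Louvain) instead.
    """
    adj = defaultdict(set)
    for h, _, t in triples:
        adj[h].add(t)
        adj[t].add(h)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def fine_tuning_examples(triples, community):
    """Turn one community's triples into (prompt, target) completion pairs."""
    return [
        (f"{h} {r} [MASK]", t)
        for h, r, t in triples
        if h in community and t in community
    ]

comms = communities(triples)
for c in comms:
    for prompt, target in fine_tuning_examples(triples, c):
        print(prompt, "->", target)
```

Each community yields its own batch of homogeneous (prompt, target) pairs, which is what lets fine-tuning concentrate on one coherent slice of the KG's knowledge at a time rather than mixing unrelated facts in every batch.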
Journal overview:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.