Title: A hybrid model for extractive summarization: Leveraging graph entropy to improve large language model performance
Author: Taner UÇKAN
Journal: Ain Shams Engineering Journal, 16(5), Article 103348 (Q1, Engineering, Multidisciplinary)
DOI: 10.1016/j.asej.2025.103348
Published: 2025-03-27
URL: https://www.sciencedirect.com/science/article/pii/S2090447925000899
Citations: 0
Abstract
Extractive text summarization models condense large texts by selecting key sentences rather than generating new ones. Recent studies have applied large language models (LLMs) to summarization, but their cost and processing time make high performance difficult to achieve. This study introduces a hybrid model that combines graph entropy with LLMs to improve both summarization accuracy and time efficiency. First, the text is represented as a graph with each sentence as a node. Karci Entropy (KE) measures the information content of each sentence, and the model selects the most valuable sentences, which are then processed by LLMs such as BERT, RoBERTa, and XLNet to create summaries of 400 words, 200 words, and 3 sentences. Testing on the DUC 2002 and CNN/Daily Mail datasets shows significant gains in both accuracy and processing speed, highlighting the proposed model’s effectiveness.
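The pipeline the abstract describes can be sketched in code: build a graph over sentences, score each node with an entropy measure, and keep the highest-scoring sentences as the extractive summary fed to the LLM. This is a minimal illustration only; the similarity measure (Jaccard word overlap here) and the Shannon-style node entropy are stand-in assumptions, since the abstract does not give the exact Karci Entropy formulation or graph construction used in the paper.

```python
import math

def sentence_similarity(a, b):
    """Jaccard word overlap between two sentences -- a stand-in for the
    paper's (unspecified) edge-weighting scheme."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def entropy_scores(sentences, threshold=0.1):
    """Score each sentence (graph node) by the Shannon entropy of its
    normalized edge weights -- a generic graph-entropy proxy, not the
    actual Karci Entropy from the paper."""
    n = len(sentences)
    scores = []
    for i in range(n):
        weights = [sentence_similarity(sentences[i], sentences[j])
                   for j in range(n) if j != i]
        weights = [w for w in weights if w > threshold]  # prune weak edges
        total = sum(weights)
        if total == 0:
            scores.append(0.0)  # isolated node carries no graph information
            continue
        probs = [w / total for w in weights]
        scores.append(-sum(p * math.log2(p) for p in probs))
    return scores

def select_top_sentences(sentences, k):
    """Keep the k highest-scoring sentences, preserving document order,
    as the extractive input handed to the downstream LLM."""
    scores = entropy_scores(sentences)
    top = sorted(range(len(sentences)),
                 key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]
```

In the paper's full system, the selected sentences would then be passed to BERT, RoBERTa, or XLNet for the final 400-word, 200-word, or 3-sentence summary; the sketch above covers only the graph-entropy selection stage.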
Journal introduction:
Ain Shams Engineering Journal is an international journal devoted to the publication of peer-reviewed, original, high-quality research papers and review papers on both traditional topics and emerging science and technology. Welcome areas include those of theoretical and fundamental interest as well as those concerning industrial applications, emerging instrumental techniques, and work with practical application to aspects of human endeavor such as environmental preservation, health, and waste disposal. The overall focus is on original, rigorous scientific research results of generic significance.
Ain Shams Engineering Journal focuses on mechanical engineering, electrical engineering, civil engineering, chemical engineering, petroleum engineering, environmental engineering, and architectural and urban planning engineering. Papers that integrate knowledge from other disciplines with engineering are especially welcome, including nanotechnology, materials science, and computational methods, as well as applied basic sciences: engineering mathematics, physics, and chemistry.