A novel unsupervised fine-tuning method for text summarization, and highlighting the limitations of ROUGE score

Ala Alam Falaki, Robin Gras
DOI: 10.1016/j.mlwa.2025.100666
Journal: Machine Learning with Applications, Volume 20, Article 100666
Published: 2025-05-22 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2666827025000490
Citations: 0

Abstract

The limited availability of datasets for text summarization tasks and their similar characteristics (e.g., news articles) make it crucial to focus on unsupervised learning techniques to enable summarization across different domains. Moreover, since summarization produces text output, effective methods developed for news articles can be applied to other domains lacking sufficient labeled data. This study introduces a novel target selection process to be used as an unsupervised learning method for fine-tuning text summarization models with unlabeled data. The process involves two steps: first, generating an extractive summary (Ext-Reference) from the article, and second, using an abstractive model to create a pool of candidate summaries. The most suitable summary (to be used as the target) is then selected by calculating the cosine similarity between the Ext-Reference’s embedding and each candidate’s embedding. Furthermore, this project underscores the limitations of the ROUGE score, which assigns a relatively low score to this method. However, extended analysis with various metrics, including using GPT-4 as a judge, demonstrates the effectiveness of this technique for fine-tuning models without a specific target reference. It highlights the importance of using a combination of metrics, like those included in the SumEvaluator package released alongside this paper. SumEvaluator package on Github: https://github.com/AlaFalaki/SumEvaluator.
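The selection step described above, and the ROUGE limitation the abstract points to, can be sketched as follows. This is a minimal illustration under stated assumptions: hand-made 3-d vectors stand in for real sentence embeddings (the abstract does not specify the paper's encoder), the candidate list stands in for an abstractive model's sampled summaries, and `rouge1_recall` is a simplified unigram-overlap recall, not the full ROUGE implementation.

```python
import numpy as np
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target(ext_ref_emb, candidate_embs):
    """Return the index of the candidate whose embedding is closest
    to the extractive reference (Ext-Reference) embedding."""
    sims = [cosine(ext_ref_emb, c) for c in candidate_embs]
    return int(np.argmax(sims))

def rouge1_recall(candidate, reference):
    """Simplified ROUGE-1 recall: fraction of reference unigrams
    that also appear in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    return overlap / sum(ref.values())

# Toy stand-ins for sentence embeddings (assumption, not from the paper).
ext_ref = np.array([1.0, 0.2, 0.9])
cands = [np.array([0.95, 0.25, 0.85]),  # semantically close to the reference
         np.array([0.10, 1.00, 0.00])]  # off-topic
best = select_target(ext_ref, cands)  # picks index 0, the closer candidate

# ROUGE's blind spot: a faithful paraphrase sharing no unigrams with the
# reference scores 0, even though its meaning is equivalent.
score = rouge1_recall("a feline rested upon a rug", "the cat sat on the mat")
```

The embedding-based selection rewards the semantically closest candidate, while the unigram-overlap score gives a paraphrase zero credit, which is the mismatch the paper's extended evaluation (including GPT-4 as a judge) is designed to expose.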
Source journal:
Machine Learning with Applications
Subject areas: Management Science and Operations Research; Artificial Intelligence; Computer Science Applications
Self-citation rate: 0.00%
Review time: 98 days