{"title":"Improving ROUGE-1 by 6%: A novel multilingual transformer for abstractive news summarization","authors":"Sandeep Kumar, Arun Solanki","doi":"10.1002/cpe.8199","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Natural language processing (NLP) has undergone a significant transformation, evolving from manually crafted rules to powerful deep learning techniques such as transformers. These advancements have revolutionized various domains including summarization, question answering, and more. Statistical models like hidden Markov models (HMMs) and supervised learning have played crucial roles in laying the foundation for this progress. Recent breakthroughs in transfer learning and the emergence of large-scale models like BERT and GPT have further pushed the boundaries of NLP research. However, news summarization remains a challenging task in NLP, often resulting in factual inaccuracies or the loss of the article's essence. In this study, we propose a novel approach to news summarization utilizing a fine-tuned Transformer architecture pre-trained on Google's mt-small tokenizer. Our model demonstrates significant performance improvements over previous methods on the Inshorts English News dataset, achieving a 6% enhancement in the ROUGE-1 score and reducing training loss by 50%. This breakthrough facilitates the generation of reliable and concise news summaries, thereby enhancing information accessibility and user experience. Additionally, we conduct a comprehensive evaluation of our model's performance using popular metrics such as ROUGE scores, with our proposed model achieving ROUGE-1: 54.6130, ROUGE-2: 31.1543, ROUGE-L: 50.7709, and ROUGE-LSum: 50.7907. Furthermore, we observe a substantial reduction in training and validation losses, underscoring the effectiveness of our proposed approach.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"36 20","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.8199","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Natural language processing (NLP) has undergone a significant transformation, evolving from manually crafted rules to powerful deep learning techniques such as transformers. These advancements have revolutionized domains including summarization and question answering. Statistical models such as hidden Markov models (HMMs), together with supervised learning, laid the foundation for this progress, and recent breakthroughs in transfer learning and the emergence of large-scale models like BERT and GPT have pushed the boundaries of NLP research further. News summarization, however, remains a challenging task, often producing factual inaccuracies or losing the essence of the article. In this study, we propose a novel approach to news summarization that fine-tunes a Transformer architecture paired with Google's pre-trained mt-small tokenizer. Our model demonstrates significant performance improvements over previous methods on the Inshorts English News dataset, achieving a 6% gain in ROUGE-1 score and a 50% reduction in training loss. This facilitates the generation of reliable, concise news summaries, thereby improving information accessibility and user experience. Additionally, we conduct a comprehensive evaluation of the model using standard metrics: it achieves ROUGE-1: 54.6130, ROUGE-2: 31.1543, ROUGE-L: 50.7709, and ROUGE-LSum: 50.7907. We also observe a substantial reduction in training and validation losses, underscoring the effectiveness of the proposed approach.
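To make the pipeline described in the abstract concrete, the sketch below loads the publicly available google/mt5-small checkpoint via Hugging Face Transformers, generates an abstractive summary with beam search, and scores it against a reference with a from-scratch ROUGE-1 F1 (unigram overlap). This is a minimal illustration, not the authors' setup: the checkpoint name, generation settings, and example texts are assumptions, the off-the-shelf checkpoint has not been fine-tuned on Inshorts news, and a full evaluation would typically use a dedicated ROUGE package.

```python
# Minimal sketch, NOT the authors' exact setup: the checkpoint name,
# generation settings, and example texts are illustrative assumptions.
from collections import Counter

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Publicly available multilingual T5 checkpoint (assumed for illustration);
# the paper fine-tunes its model before generation, which is omitted here.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

article = ("The city council approved a new transit budget on Monday, "
           "allocating funds for additional bus routes and station repairs.")

# Encode the article and generate an abstractive summary with beam search.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=48, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of clipped unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # unigram matches, clipped per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "City council passes transit budget funding new bus routes and repairs."
print(f"Summary: {summary}")
print(f"ROUGE-1 F1: {rouge1_f1(summary, reference):.4f}")
```

In the paper's actual workflow, a fine-tuning step on article/summary pairs (for example with the library's Seq2SeqTrainer) would precede generation; the untuned checkpoint used here will not produce news-quality summaries.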
About the Journal:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.