{"title":"BD2TSumm: A Benchmark Dataset for Abstractive Disaster Tweet Summarization","authors":"Piyush Kumar Garg , Roshni Chakraborty , Sourav Kumar Dandapat","doi":"10.1016/j.osnem.2024.100299","DOIUrl":null,"url":null,"abstract":"<div><div>Online social media platforms, such as Twitter, are mediums for valuable updates during disasters. However, the large scale of available information makes it difficult for humans to identify relevant information from the available information. An automatic summary of these tweets provides identification of relevant information easy and ensures a holistic overview of a disaster event to process the aid for disaster response. In literature, there are two types of abstractive disaster tweet summarization approaches based on the format of output summary: key-phrased-based (where summary is a set of key-phrases) and sentence-based (where summary is a paragraph consisting of sentences). Existing sentence-based abstractive approaches are either unsupervised or supervised. However, both types of approaches require a sizable amount of ground-truth summaries for training and/or evaluation such that they work on disaster events irrespective of type and location. The lack of abstractive disaster ground-truth summaries and guidelines for annotation motivates us to come up with a systematic procedure to create abstractive sentence ground-truth summaries of disaster events. Therefore, this paper presents a two-step systematic annotation procedure for sentence-based abstractive summary creation. Additionally, we release <em>BD2TSumm</em>, i.e., a benchmark ground-truth dataset for evaluating the sentence-based abstractive summarization approaches for disaster events. <em>BD2TSumm</em> consists of 15 ground-truth summaries belonging to 5 different continents and both natural and man-made disaster types. Furthermore, to ensure the high quality of the generated ground-truth summaries, we evaluate them qualitatively (using five metrics) and quantitatively (using two metrics). Finally, we compare 12 existing State-Of-The-Art (SOTA) abstractive summarization approaches on these ground-truth summaries using ROUGE-N F1-score.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"45 ","pages":"Article 100299"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696424000247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Abstract
Online social media platforms, such as Twitter, are mediums for valuable updates during disasters. However, the sheer volume of available information makes it difficult for humans to identify the relevant portions. An automatic summary of these tweets makes identification of relevant information easy and ensures a holistic overview of a disaster event, which in turn supports disaster response. In the literature, there are two types of abstractive disaster tweet summarization approaches based on the format of the output summary: key-phrase-based (where the summary is a set of key-phrases) and sentence-based (where the summary is a paragraph consisting of sentences). Existing sentence-based abstractive approaches are either unsupervised or supervised. However, both types of approaches require a sizable amount of ground-truth summaries for training and/or evaluation so that they work on disaster events irrespective of type and location. The lack of abstractive disaster ground-truth summaries and guidelines for annotation motivates us to develop a systematic procedure for creating abstractive sentence ground-truth summaries of disaster events. Therefore, this paper presents a two-step systematic annotation procedure for sentence-based abstractive summary creation. Additionally, we release BD2TSumm, a benchmark ground-truth dataset for evaluating sentence-based abstractive summarization approaches for disaster events. BD2TSumm consists of 15 ground-truth summaries covering 5 different continents and both natural and man-made disaster types. Furthermore, to ensure the high quality of the generated ground-truth summaries, we evaluate them qualitatively (using five metrics) and quantitatively (using two metrics). Finally, we compare 12 existing state-of-the-art (SOTA) abstractive summarization approaches on these ground-truth summaries using the ROUGE-N F1-score.
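
The comparison of SOTA approaches above rests on ROUGE-N F1-scores computed between each system summary and the ground-truth summaries. As a minimal sketch of how such an evaluation could be run, the snippet below uses the open-source rouge-score Python package (pip install rouge-score); the two summary strings are hypothetical placeholders, not taken from BD2TSumm, and the paper may use a different ROUGE implementation.

# Hedged sketch: ROUGE-1 and ROUGE-2 F1 evaluation with the rouge-score package.
# The reference and system summaries are invented examples, not BD2TSumm data.
from rouge_score import rouge_scorer

reference_summary = (
    "Flooding displaced thousands of residents; relief camps were set up "
    "and rescue operations continued through the night."
)
system_summary = (
    "Thousands displaced by floods as rescue teams set up relief camps overnight."
)

# ROUGE-1 and ROUGE-2 correspond to ROUGE-N for N = 1 and N = 2.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
scores = scorer.score(reference_summary, system_summary)

for metric, result in scores.items():
    # fmeasure is the F1-score, i.e., the harmonic mean of n-gram
    # precision and recall against the ground-truth summary.
    print(f"{metric}: F1 = {result.fmeasure:.3f}")

In practice, scores would be averaged over all 15 ground-truth summaries in the dataset for each of the 12 compared approaches.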