Xingxing Ding, Ruo Wang, Zhong Zheng, Xuan Liu, Quan Zhu, Ruiqun Li, Wanru Du, Siyuan Shen
{"title":"DoS:基于文档共享预训练模型的抽象文本摘要","authors":"Xingxing Ding, Ruo Wang, Zhong Zheng, Xuan Liu, Quan Zhu, Ruiqun Li, Wanru Du, Siyuan Shen","doi":"10.1109/IIP57348.2022.00040","DOIUrl":null,"url":null,"abstract":"In this paper, an abstractive text summarization method with document sharing is proposed. It consists of a pretrained model and self-attention mechanism on multi-document. We call it DoS mechanism. By applying the mechanism to the single-document text summarization task, the model can absorb information from multiple documents, thus enhancing its effectiveness of the model. We compared the results with several models. The experimental results show that the pre-trained model with modified attention provides the best results, where the values of Rouge-l, Rouge-2, and Rouge-L are 41.3%, 27.4%, and 38.0%, respectively. Evaluations on the LCSTS demonstrate that our model outperforms the baseline model. Subsequent analysis showed that our model was able to generate higherquality summaries.","PeriodicalId":412907,"journal":{"name":"2022 4th International Conference on Intelligent Information Processing (IIP)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DoS: Abstractive text summarization based on pretrained model with document sharing\",\"authors\":\"Xingxing Ding, Ruo Wang, Zhong Zheng, Xuan Liu, Quan Zhu, Ruiqun Li, Wanru Du, Siyuan Shen\",\"doi\":\"10.1109/IIP57348.2022.00040\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, an abstractive text summarization method with document sharing is proposed. It consists of a pretrained model and self-attention mechanism on multi-document. We call it DoS mechanism. By applying the mechanism to the single-document text summarization task, the model can absorb information from multiple documents, thus enhancing its effectiveness of the model. We compared the results with several models. The experimental results show that the pre-trained model with modified attention provides the best results, where the values of Rouge-l, Rouge-2, and Rouge-L are 41.3%, 27.4%, and 38.0%, respectively. Evaluations on the LCSTS demonstrate that our model outperforms the baseline model. 
Subsequent analysis showed that our model was able to generate higherquality summaries.\",\"PeriodicalId\":412907,\"journal\":{\"name\":\"2022 4th International Conference on Intelligent Information Processing (IIP)\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 4th International Conference on Intelligent Information Processing (IIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IIP57348.2022.00040\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Intelligent Information Processing (IIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IIP57348.2022.00040","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DoS: Abstractive text summarization based on pretrained model with document sharing
In this paper, we propose an abstractive text summarization method with document sharing. It combines a pretrained model with a self-attention mechanism over multiple documents, which we call the DoS (document sharing) mechanism. By applying this mechanism to the single-document summarization task, the model can absorb information from multiple documents, enhancing its effectiveness. We compared the results with several models. The experimental results show that the pretrained model with the modified attention provides the best results, with ROUGE-1, ROUGE-2, and ROUGE-L scores of 41.3%, 27.4%, and 38.0%, respectively. Evaluations on the LCSTS dataset demonstrate that our model outperforms the baseline model. Subsequent analysis showed that our model was able to generate higher-quality summaries.
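The abstract does not describe how the document-sharing attention is wired into the model, so the following is only a minimal sketch of the general idea: letting the encoder states of the document to be summarized attend over encoder states of shared related documents. It assumes a PyTorch encoder-decoder setting; the class name DocumentSharingAttention, the hidden size, and the residual wiring are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a multi-document (document sharing) attention step.
# Assumption: encoder outputs for the target document and the shared
# documents are already available as tensors; this is NOT the paper's
# actual architecture, just one plausible realization of the idea.
import torch
import torch.nn as nn

class DocumentSharingAttention(nn.Module):
    """Hypothetical layer: the target document's encoder states attend
    over the concatenated encoder states of several shared documents."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, target_states: torch.Tensor, shared_states: torch.Tensor) -> torch.Tensor:
        # target_states: (batch, tgt_len, hidden)    -- document to summarize
        # shared_states: (batch, shared_len, hidden) -- concatenated shared documents
        fused, _ = self.attn(query=target_states, key=shared_states, value=shared_states)
        # Residual connection keeps the original single-document signal.
        return target_states + fused

# Toy usage with random tensors standing in for encoder outputs.
batch, tgt_len, shared_len, hidden = 2, 64, 256, 768
layer = DocumentSharingAttention(hidden_size=hidden)
target = torch.randn(batch, tgt_len, hidden)
shared = torch.randn(batch, shared_len, hidden)
out = layer(target, shared)  # shape: (2, 64, 768)
print(out.shape)

The fused states could then be fed to a summarization decoder in place of the plain single-document encoder output; how the paper actually injects the shared information is not specified in the abstract.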