{"title":"LSTM中局部注意对抽象文本摘要的影响","authors":"Puruso Muhammad Hanunggul, S. Suyanto","doi":"10.1109/ISRITI48646.2019.9034616","DOIUrl":null,"url":null,"abstract":"An attentional mechanism is very important to enhance a neural machine translation (NMT). There are two classes of attentions: global and local attentions. This paper focuses on comparing the impact of the local attention in Long Short-Term Memory (LSTM) model to generate an abstractive text summarization (ATS). Developing a model using a dataset of Amazon Fine Food Reviews and evaluating it using dataset of GloVe shows that the global attention-based model produces better ROUGE-1, where it generates more words contained in the actual summary. But, the local attention-based gives higher ROUGE-2, where it generates more pairs of words contained in the actual summary, since the mechanism of local attention considers the subset of input words instead of the whole input words.","PeriodicalId":367363,"journal":{"name":"2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"The Impact of Local Attention in LSTM for Abstractive Text Summarization\",\"authors\":\"Puruso Muhammad Hanunggul, S. Suyanto\",\"doi\":\"10.1109/ISRITI48646.2019.9034616\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An attentional mechanism is very important to enhance a neural machine translation (NMT). There are two classes of attentions: global and local attentions. This paper focuses on comparing the impact of the local attention in Long Short-Term Memory (LSTM) model to generate an abstractive text summarization (ATS). Developing a model using a dataset of Amazon Fine Food Reviews and evaluating it using dataset of GloVe shows that the global attention-based model produces better ROUGE-1, where it generates more words contained in the actual summary. 
But, the local attention-based gives higher ROUGE-2, where it generates more pairs of words contained in the actual summary, since the mechanism of local attention considers the subset of input words instead of the whole input words.\",\"PeriodicalId\":367363,\"journal\":{\"name\":\"2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISRITI48646.2019.9034616\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISRITI48646.2019.9034616","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20
Abstract
An attention mechanism is very important for enhancing neural machine translation (NMT). There are two classes of attention: global and local. This paper compares the impact of local attention in a Long Short-Term Memory (LSTM) model for generating abstractive text summarization (ATS). Developing a model on the Amazon Fine Food Reviews dataset and evaluating it with the GloVe dataset shows that the global attention-based model produces a better ROUGE-1 score, generating more single words that appear in the reference summary. However, the local attention-based model achieves a higher ROUGE-2 score, generating more word pairs that appear in the reference summary, since local attention attends to a subset of the input words rather than the whole input.
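As a rough illustration of the distinction the abstract draws, the sketch below contrasts global attention (scoring every encoder position) with a simple windowed form of local attention (scoring only positions near a chosen center). This is a minimal NumPy sketch under assumed names, a dot-product score, and a hard window of size 2D+1; the paper's exact local-attention variant (e.g., Luong-style predictive alignment with Gaussian weighting) may differ.

```python
# Illustrative sketch (not the authors' code): global vs. windowed local attention
# over encoder hidden states. Function names, D, and the dot-product score are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def global_attention(decoder_state, encoder_states):
    # Scores every source position, then normalizes over all of them.
    weights = softmax(encoder_states @ decoder_state)
    return weights @ encoder_states  # context vector over the whole input

def local_attention(decoder_state, encoder_states, center, D=2):
    # Scores only a window of 2D+1 positions around `center`,
    # so source words outside the window do not contribute to the context.
    lo, hi = max(0, center - D), min(len(encoder_states), center + D + 1)
    window = encoder_states[lo:hi]
    weights = softmax(window @ decoder_state)
    return weights @ window

# Toy example: 6 source positions, hidden size 4.
rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 4))
dec = rng.normal(size=4)
print(global_attention(dec, enc))
print(local_attention(dec, enc, center=3))
```

The only difference between the two functions is the slice that restricts scoring to a neighborhood of the aligned position, which is the mechanism the abstract credits for local attention's stronger ROUGE-2 (word-pair) overlap.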