TransLSTM: A hybrid LSTM-Transformer model for fine-grained suggestion mining

Samad Riaz , Amna Saghir , Muhammad Junaid Khan , Hassan Khan , Hamid Saeed Khan , M. Jaleed Khan
Natural Language Processing Journal, Volume 8, Article 100089. Published 2024-07-14. DOI: 10.1016/j.nlp.2024.100089

Abstract


Digital platforms on the internet are invaluable for collecting user feedback, suggestions, and opinions about various topics, such as company products and services. This data is instrumental in shaping business strategies, enhancing product development, and refining service delivery. Suggestion mining, a key task in natural language processing, focuses on extracting and analysing suggestions from these digital sources. Initially, suggestion mining relied on manually crafted features, but recent advancements have highlighted the efficacy of deep learning models, which learn features automatically. Models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT) have been employed in this field. However, considering the relatively small datasets and the faster training time of LSTM compared to BERT, we introduce TransLSTM, a novel LSTM-Transformer hybrid model for suggestion mining. This model aims to automatically pinpoint and extract suggestions by harnessing both local and global text dependencies. It combines the sequential dependency handling of LSTM with the contextual interaction capabilities of the Transformer, thus effectively identifying and extracting suggestions. We evaluated our method against state-of-the-art approaches using the SemEval Task-9 dataset, a benchmark for suggestion mining. Our model shows promising performance, surpassing existing deep learning methods by 6.76% with an F1 score of 0.834 for SubTask A and 0.881 for SubTask B. Additionally, our paper presents an exhaustive literature review on suggestion mining from digital platforms, covering both traditional and state-of-the-art text classification techniques.
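The abstract does not specify the architecture's implementation details. Purely as an illustration of the hybrid idea it describes (an LSTM for local sequential dependencies feeding a Transformer encoder for global contextual interactions), the following is a minimal, hypothetical PyTorch sketch; all layer sizes, the pooling strategy, and the class names are assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class TransLSTMSketch(nn.Module):
    """Hypothetical sketch of an LSTM + Transformer hybrid text classifier.

    Illustrates the general idea only: a bidirectional LSTM captures
    local, order-sensitive features; a Transformer encoder layer adds
    global token-to-token context; a linear head classifies the pooled
    sequence representation (e.g. suggestion vs. non-suggestion).
    """

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64,
                 num_heads=4, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM -> local sequential dependencies
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Transformer encoder layer -> global contextual interactions
        self.encoder = nn.TransformerEncoderLayer(
            d_model=2 * hidden_dim, nhead=num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)    # (batch, seq_len, embed_dim)
        x, _ = self.lstm(x)          # (batch, seq_len, 2 * hidden_dim)
        x = self.encoder(x)          # same shape, now globally contextualised
        x = x.mean(dim=1)            # mean-pool over the token dimension
        return self.classifier(x)    # (batch, num_classes) logits

model = TransLSTMSketch()
logits = model(torch.randint(0, 10000, (8, 20)))  # batch of 8, 20 tokens each
print(logits.shape)
```

The ordering (LSTM first, Transformer second) follows the abstract's framing of combining "the sequential dependency handling of LSTM with the contextual interaction capabilities of the Transformer"; the reverse ordering and other hyperparameters would need the full paper to confirm.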
