Integrating Bidirectional LSTM with Inception for Text Classification

Wei Jiang, Zhong Jin
DOI: 10.1109/ACPR.2017.113
Published in: 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)
Publication date: November 2017
Citations: 5

Abstract

A novel neural network architecture, BLSTM-Inception v1, is proposed for text classification. It mainly consists of the BLSTM-Inception module, which has two parts, and a global max pooling layer. In the first part, the forward and backward hidden-state sequences of the BLSTM are concatenated as two channels rather than added as a single channel. The second part contains parallel asymmetric convolutions of different scales to extract nonlinear features of multi-granular n-gram phrases from the two channels. Global max pooling is used to convert variable-length text into a fixed-length vector. The proposed architecture achieves excellent results on four text classification tasks, including sentiment classification and subjectivity classification, and in particular improves accuracy by nearly 1.5% on the sentence polarity dataset of Pang and Lee compared to BLSTM-2DCNN.
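The abstract highlights two concrete design choices: stacking the forward and backward BLSTM hidden-state sequences as two channels instead of summing them into one, and applying global max pooling over time to obtain a fixed-length vector. A minimal NumPy sketch (not the authors' code; tensor shapes and names are illustrative assumptions) of just these two operations:

```python
import numpy as np

# Hedged sketch of two ideas from the abstract, using toy tensors:
# 1) forward/backward BLSTM states concatenated as two channels (2, T, H)
#    versus added as a single channel (T, H);
# 2) global max pooling over the time axis, which yields a vector whose
#    size does not depend on the sequence length T.

T, H = 7, 4                       # sequence length, hidden size (arbitrary)
h_fwd = np.random.randn(T, H)     # forward hidden-state sequence
h_bwd = np.random.randn(T, H)     # backward hidden-state sequence

single_channel = h_fwd + h_bwd              # added: one channel, shape (T, H)
double_channel = np.stack([h_fwd, h_bwd])   # concatenated: shape (2, T, H)

def global_max_pool(x):
    """Max over the time axis -> fixed-length output, whatever T is."""
    return x.max(axis=-2)

pooled = global_max_pool(double_channel)    # shape (2, H), independent of T
print(single_channel.shape, double_channel.shape, pooled.shape)
# (7, 4) (2, 7, 4) (2, 4)
```

In the full model, the parallel asymmetric convolutions of the second part would sit between the two-channel stack and the pooling step; the sketch omits them and only shows why the pooled representation has a fixed size regardless of text length.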