{"title":"集成双向LSTM和Inception的文本分类","authors":"Wei Jiang, Zhong Jin","doi":"10.1109/ACPR.2017.113","DOIUrl":null,"url":null,"abstract":"A novel neural network architecture, BLSTM-Inception v1, is proposed for text classification. It mainly consists of the BLSTM-Inception module, which has two parts, and a global max pooling layer. In the first part, forward and backward sequences of hidden states of BLSTM are concatenated as double channels, rather than added as single channel. The second part contains parallel asymmetric convolutions of different scales to extract nonlinear features of multi-granular n-gram phrases from double channels. The global max pooling is used to convert variable-length text into a fixed-length vector. The proposed architecture achieves excellent results on four text classification tasks, including sentiment classifications, subjectivity classification, and especially improves nearly 1.5% on sentence polarity dataset from Pang and Lee compared to BLSTM-2DCNN.","PeriodicalId":426561,"journal":{"name":"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Integrating Bidirectional LSTM with Inception for Text Classification\",\"authors\":\"Wei Jiang, Zhong Jin\",\"doi\":\"10.1109/ACPR.2017.113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A novel neural network architecture, BLSTM-Inception v1, is proposed for text classification. It mainly consists of the BLSTM-Inception module, which has two parts, and a global max pooling layer. In the first part, forward and backward sequences of hidden states of BLSTM are concatenated as double channels, rather than added as single channel. The second part contains parallel asymmetric convolutions of different scales to extract nonlinear features of multi-granular n-gram phrases from double channels. The global max pooling is used to convert variable-length text into a fixed-length vector. The proposed architecture achieves excellent results on four text classification tasks, including sentiment classifications, subjectivity classification, and especially improves nearly 1.5% on sentence polarity dataset from Pang and Lee compared to BLSTM-2DCNN.\",\"PeriodicalId\":426561,\"journal\":{\"name\":\"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ACPR.2017.113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACPR.2017.113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Integrating Bidirectional LSTM with Inception for Text Classification
A novel neural network architecture, BLSTM-Inception v1, is proposed for text classification. It consists mainly of a two-part BLSTM-Inception module followed by a global max pooling layer. In the first part, the forward and backward hidden-state sequences of the BLSTM are concatenated as two channels rather than summed into a single channel. The second part applies parallel asymmetric convolutions at different scales to extract nonlinear features of multi-granular n-gram phrases from the two channels. Global max pooling then converts the variable-length text into a fixed-length vector. The proposed architecture achieves excellent results on four text classification tasks, including sentiment classification and subjectivity classification, and in particular improves accuracy by nearly 1.5% over BLSTM-2DCNN on the sentence polarity dataset of Pang and Lee.
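To make the described data flow concrete, here is a minimal PyTorch sketch of such an architecture. It is an illustration assembled from the abstract alone, not the authors' exact model: the class name BLSTMInception, the layer sizes, the kernel scales, and the choice of one (k x hidden_dim) convolution per branch are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BLSTMInception(nn.Module):
    """Sketch of a BLSTM-Inception-style text classifier (illustrative, not the paper's exact model)."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=100,
                 branch_channels=64, kernel_sizes=(1, 3, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.blstm = nn.LSTM(embed_dim, hidden_dim,
                             batch_first=True, bidirectional=True)
        # One asymmetric (k x hidden_dim) convolution per n-gram scale,
        # applied in parallel to the 2-channel forward/backward feature map.
        self.branches = nn.ModuleList([
            nn.Conv2d(2, branch_channels, kernel_size=(k, hidden_dim),
                      padding=(k // 2, 0))
            for k in kernel_sizes
        ])
        self.fc = nn.Linear(branch_channels * len(kernel_sizes), num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embedding(tokens)                  # (batch, seq_len, embed_dim)
        h, _ = self.blstm(x)                        # (batch, seq_len, 2*hidden_dim)
        fwd, bwd = h.chunk(2, dim=-1)               # split the two directions
        # Concatenate directions as two channels instead of summing them.
        feat = torch.stack([fwd, bwd], dim=1)       # (batch, 2, seq_len, hidden_dim)
        pooled = []
        for conv in self.branches:
            c = F.relu(conv(feat)).squeeze(-1)      # (batch, C, seq_len')
            # Global max pooling over time -> fixed-length vector per branch.
            pooled.append(F.adaptive_max_pool1d(c, 1).squeeze(-1))
        return self.fc(torch.cat(pooled, dim=-1))   # (batch, num_classes)
```

Usage would follow the standard pattern, e.g. `logits = BLSTMInception(vocab_size=20000)(token_ids)` for a padded batch of token indices; the key design point mirrored from the abstract is that the two LSTM directions stay separate as channels, so each convolution can weight forward and backward context independently.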