Attention model with multi-layer supervision for text classification

Chunyi Yue, Hanqiang Cao, Guoping Xu, Youli Dong
{"title":"Attention model with multi-layer supervision for text Classification","authors":"Chunyi Yue, Hanqiang Cao, Guoping Xu, Youli Dong","doi":"10.1145/3395260.3395290","DOIUrl":null,"url":null,"abstract":"Text classification is a classic topic in natural language processing. In this study, we propose an attention model with multi-layer supervision for this task. In our model, the previous context vector is directly used as attention to select the required features, and multi-layer supervision is used for text classification, i.e., the prediction losses are combined across all layers in the global cost function. The main contribution of our model is that the context vector is not only used as attention but also as a representation of an input text for classification at each layer. We conducted experiments based on five benchmark text classification data sets and the results indicate that our model can improve classification performance when applied to most of the data sets.","PeriodicalId":103490,"journal":{"name":"Proceedings of the 2020 5th International Conference on Mathematics and Artificial Intelligence","volume":"8 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 5th International Conference on Mathematics and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3395260.3395290","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Text classification is a classic topic in natural language processing. In this study, we propose an attention model with multi-layer supervision for this task. In our model, the previous context vector is directly used as attention to select the required features, and multi-layer supervision is used for text classification, i.e., the prediction losses are combined across all layers in the global cost function. The main contribution of our model is that the context vector is not only used as attention but also as a representation of the input text for classification at each layer. We conducted experiments on five benchmark text classification data sets, and the results indicate that our model improves classification performance on most of them.
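The architecture described in the abstract can be made concrete with a small sketch. The PyTorch code below is an illustrative reading of that description, not the authors' implementation: the bidirectional GRU encoder, the layer count, and all names (MultiLayerAttnClassifier, multi_layer_loss, init_context, n_layers) are assumptions. It only shows the two ideas the abstract states: the previous context vector serves as the attention query at each layer, and the same context vector is classified at each layer, with the per-layer losses summed into one global cost.

```python
# Minimal sketch, assuming a BiGRU encoder and linear attention scorers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLayerAttnClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes, n_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        feat_dim = 2 * hidden_dim
        self.n_layers = n_layers
        # One attention scorer and one classifier head per layer.
        self.attn_score = nn.ModuleList(
            nn.Linear(2 * feat_dim, 1) for _ in range(n_layers))
        self.classifiers = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_layers))
        # Learned initial "previous context" for the first layer (an assumption;
        # the abstract does not say how the first query is obtained).
        self.init_context = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h, _ = self.encoder(self.embedding(tokens))      # (batch, seq, feat_dim)
        batch, seq_len, feat_dim = h.shape
        context = self.init_context.expand(batch, feat_dim)
        logits_per_layer = []
        for layer in range(self.n_layers):
            # The previous context vector is used directly as the attention query.
            query = context.unsqueeze(1).expand(-1, seq_len, -1)
            scores = self.attn_score[layer](torch.cat([h, query], dim=-1))
            weights = F.softmax(scores, dim=1)            # (batch, seq, 1)
            context = (weights * h).sum(dim=1)            # new context vector
            # The same context vector is also the text representation
            # classified at this layer.
            logits_per_layer.append(self.classifiers[layer](context))
        return logits_per_layer

def multi_layer_loss(logits_per_layer, labels):
    # Multi-layer supervision: combine the prediction loss of every layer
    # into the global cost function.
    return sum(F.cross_entropy(logits, labels) for logits in logits_per_layer)
```

At training time, multi_layer_loss(model(tokens), labels) plays the role of the global cost; at inference one would typically take the prediction of the last layer (or average the per-layer logits), a detail the abstract leaves unspecified.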