How to Churn Deep Contextual Models?

Mohammad Rashedul Hasan
{"title":"How to Churn Deep Contextual Models?","authors":"Mohammad Rashedul Hasan","doi":"10.1145/3486622.3493962","DOIUrl":null,"url":null,"abstract":"This paper searches for optimal ways of employing deep contextual models to solve practical natural language processing tasks. It addresses the diversity in the problem space by utilizing a variety of techniques that are based on the deep contextual BERT (Bidirectional Encoder Representation from Transformer) model. A collection of datasets on COVID-19 social media misinformation is used to capture the challenge in the misinformation detection task that arises from small labeled data, noisy labels, out-of-distribution (OOD) data, fine-grained & nuanced categories, and heavily-skewed class distribution. To address this diversity, both domain-agnostic (DA) and domain-specific (DS) BERT pretrained models (PTMs) for transfer learning are examined via two methods, i.e., fine-tuning (FT) and extracted feature-based (FB) learning. The FB is implemented using two approaches: non-hierarchical (features extracted from a single hidden layer) and hierarchical (features extracted from a subset of hidden layers are first aggregated, then passed to a neural network for further extraction). Results obtained from an extensive set of experiments show that FB is more effective than FT and that hierarchical FB is more generalizable. However, on the OOD data, the deep contextual models are less generalizable. It identifies the condition under which DS PTM is beneficial. Finally, bigger models may only add an incremental benefit and sometimes degrade the performance.","PeriodicalId":89230,"journal":{"name":"Proceedings. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3486622.3493962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper searches for optimal ways of employing deep contextual models to solve practical natural language processing tasks. It addresses the diversity in the problem space by utilizing a variety of techniques based on the deep contextual BERT (Bidirectional Encoder Representations from Transformers) model. A collection of datasets on COVID-19 social media misinformation is used to capture the challenges in the misinformation detection task that arise from small labeled data, noisy labels, out-of-distribution (OOD) data, fine-grained and nuanced categories, and heavily skewed class distributions. To address this diversity, both domain-agnostic (DA) and domain-specific (DS) BERT pretrained models (PTMs) for transfer learning are examined via two methods: fine-tuning (FT) and extracted feature-based (FB) learning. FB is implemented using two approaches: non-hierarchical (features extracted from a single hidden layer) and hierarchical (features extracted from a subset of hidden layers are first aggregated, then passed to a neural network for further extraction). Results from an extensive set of experiments show that FB is more effective than FT and that hierarchical FB is more generalizable. However, on the OOD data, the deep contextual models are less generalizable. The paper also identifies the conditions under which a DS PTM is beneficial. Finally, larger models may add only an incremental benefit and can sometimes degrade performance.
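To make the distinction between the two feature-based (FB) approaches concrete, the following is a minimal sketch, not the paper's code: the checkpoint (bert-base-uncased), the choice of the last four layers, the mean aggregation, and the classifier head sizes are illustrative assumptions, shown with the Hugging Face transformers library.

```python
# Sketch of non-hierarchical vs. hierarchical feature-based (FB) learning with a
# frozen BERT encoder. All layer/pooling/head choices below are assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()  # FB learning keeps the pretrained encoder frozen

texts = ["Example COVID-19 claim to classify."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**batch)
# hidden_states: tuple of (num_layers + 1) tensors, each of shape (batch, seq_len, 768)
hidden_states = outputs.hidden_states

# Non-hierarchical FB: take the [CLS] vector from a single hidden layer (here: the last).
single_layer_features = hidden_states[-1][:, 0, :]  # (batch, 768)

# Hierarchical FB: aggregate [CLS] vectors from a subset of layers (here: the last four),
# then let a small neural network extract the final representation for classification.
subset = torch.stack([hidden_states[i][:, 0, :] for i in (-4, -3, -2, -1)], dim=1)
aggregated = subset.mean(dim=1)  # (batch, 768)

classifier = nn.Sequential(       # trained on the labeled task data; sizes are illustrative
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, 2),            # e.g., misinformation vs. not
)
logits = classifier(aggregated)
```

In this setup only the classifier head is trained, which is what distinguishes FB learning from fine-tuning (FT), where the BERT weights themselves are updated on the downstream task.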