Soft Prompt-tuning with Self-Resource Verbalizer for short text streams

Impact Factor: 7.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Yi Zhu, Ye Wang, Yun Li, Jipeng Qiang, Yunhao Yuan
{"title":"Soft Prompt-tuning with Self-Resource Verbalizer for short text streams","authors":"Yi Zhu ,&nbsp;Ye Wang ,&nbsp;Yun Li ,&nbsp;Jipeng Qiang ,&nbsp;Yunhao Yuan","doi":"10.1016/j.engappai.2024.109589","DOIUrl":null,"url":null,"abstract":"<div><div>Short text streams such as real-time news and search snippets have attained vast amounts of attention and research in recent decades, the characteristics of high generation velocity, feature sparsity, and high ambiguity accentuate both the importance and challenges to language models. However, most of the existing short text stream classification methods can neither automatically select relevant knowledge components for arbitrary samples, nor expand knowledge internally instead of rely on external open knowledge base to address the inherent limitations of short text stream. In this paper, we propose a Soft Prompt-tuning with Self-Resource Verbalizer (SPSV for short) for short text stream classification, the soft prompt with self-resource knowledgeable expansion is conducted for updating label words space to address evolved semantic topics in the data streams. Specifically, the automatic constructed prompt is first generated to instruct the model prediction, which is optimized to address the problem of high velocity and topic drift in short text streams. Then, in each chunk, the projection between category names and label words space, i.e. verbalizer, is updated, which is constructed by internal knowledge expansion from the short text itself. Through comprehensive experiments on four well-known benchmark datasets, we validate the superb performance of our method compared to other short text stream classification and fine-tuning PLMs methods, which achieves up to more than 90% classification accuracy with the counts of data chunk increased.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109589"},"PeriodicalIF":7.5000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624017470","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Short text streams such as real-time news and search snippets have attracted extensive attention and research in recent decades. Their high generation velocity, feature sparsity, and high ambiguity accentuate both their importance to language models and the challenges they pose. However, most existing short text stream classification methods can neither automatically select relevant knowledge components for arbitrary samples nor expand knowledge internally, relying instead on external open knowledge bases to address the inherent limitations of short text streams. In this paper, we propose Soft Prompt-tuning with a Self-Resource Verbalizer (SPSV for short) for short text stream classification, in which a soft prompt with self-resource knowledgeable expansion updates the label word space to track evolving semantic topics in the data stream. Specifically, an automatically constructed prompt is first generated to instruct the model's prediction; it is optimized to address the high velocity and topic drift of short text streams. Then, in each chunk, the projection between category names and the label word space, i.e., the verbalizer, is updated; this projection is constructed by internal knowledge expansion from the short texts themselves. Through comprehensive experiments on four well-known benchmark datasets, we validate the superior performance of our method over other short text stream classification methods and fine-tuned PLMs, achieving more than 90% classification accuracy as the number of data chunks grows.
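The abstract's pipeline, a frozen pretrained language model, a learnable soft prompt, and a verbalizer that projects [MASK]-position logits onto classes, can be made concrete with a short sketch. The class below illustrates generic soft prompt-tuning with a verbalizer, not the authors' SPSV code: the model name, the "It is about [MASK]." template, and the assumption that each label word is a single vocabulary token are all ours.

```python
# A minimal sketch of soft prompt-tuning with a verbalizer over a frozen
# masked language model. Illustrative only: template text, label words,
# and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

class SoftPromptClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_prompt=10,
                 label_words=None):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(model_name)
        self.mlm = AutoModelForMaskedLM.from_pretrained(model_name)
        self.mlm.requires_grad_(False)   # freeze the PLM; tune only the prompt
        hidden = self.mlm.config.hidden_size
        self.n_prompt = n_prompt
        # Learnable soft prompt prepended to every input in embedding space.
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(n_prompt, hidden))
        self.set_label_words(label_words)

    def set_label_words(self, label_words):
        # Verbalizer: one list of label words per class, e.g.
        # [["sports", "game"], ["politics", "election"]].
        # Assumes each label word is a single token in the vocabulary.
        self.label_word_ids = [[self.tok.convert_tokens_to_ids(w) for w in ws]
                               for ws in label_words]

    def forward(self, texts):
        # Hand-written template; the paper constructs its prompt automatically.
        enc = self.tok([f"{t} It is about {self.tok.mask_token}." for t in texts],
                       return_tensors="pt", padding=True, truncation=True)
        emb = self.mlm.get_input_embeddings()(enc["input_ids"])
        b = emb.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        attn = torch.cat([torch.ones(b, self.n_prompt, dtype=torch.long),
                          enc["attention_mask"]], dim=1)
        out = self.mlm(inputs_embeds=torch.cat([prompt, emb], dim=1),
                       attention_mask=attn)
        # [MASK] positions shift right by n_prompt after prepending the prompt.
        mask_pos = enc["input_ids"].eq(self.tok.mask_token_id)
        mask_logits = out.logits[:, self.n_prompt:, :][mask_pos]  # (batch, vocab)
        # Verbalizer projection: average each class's label-word logits.
        return torch.stack([mask_logits[:, ids].mean(dim=-1)
                            for ids in self.label_word_ids], dim=-1)
```

Since the PLM stays frozen, an optimizer over `model.soft_prompt` alone, with cross-entropy on the returned class scores, is enough to tune the prompt as chunks arrive.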
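The second component, updating the verbalizer chunk by chunk from the stream itself, can be sketched in the same spirit. The frequency-based expansion below is a simple stand-in for the paper's self-resource knowledgeable expansion; `expand_label_words` and the `stream` iterable are hypothetical names.

```python
# A hedged sketch of per-chunk verbalizer updating: grow each class's label
# words with frequent content words taken from the chunk's own texts. The
# frequency heuristic is a stand-in, not the published expansion rule.
from collections import Counter

def expand_label_words(model, chunk_texts, chunk_labels, label_words, top_k=3):
    counters = [Counter() for _ in label_words]
    for text, y in zip(chunk_texts, chunk_labels):
        for token in model.tok.tokenize(text.lower()):
            if token.isalpha() and len(token) > 3:  # crude content-word filter
                counters[y][token] += 1
    for y, words in enumerate(label_words):
        for token, _ in counters[y].most_common(top_k):
            if token not in words:
                words.append(token)                  # grow the label word set
    model.set_label_words(label_words)               # refresh the projection
    return label_words

# Streaming loop (hypothetical `stream` yielding (texts, labels) chunks):
# classify a chunk, then refresh the verbalizer before the next one arrives.
# for chunk_texts, chunk_labels in stream:
#     scores = model(chunk_texts)
#     label_words = expand_label_words(model, chunk_texts,
#                                      chunk_labels, label_words)
```

Expanding label words from the chunk itself is what keeps the projection aligned with drifting topics without consulting an external knowledge base.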
Source Journal
Engineering Applications of Artificial Intelligence
Category: Engineering Technology – Engineering: Electronic & Electrical
CiteScore: 9.60
Self-citation rate: 10.00%
Annual publication volume: 505 articles
Review time: 68 days
Journal introduction: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.