Yi Zhu, Ye Wang, Yun Li, Jipeng Qiang, Yunhao Yuan
Engineering Applications of Artificial Intelligence, vol. 139, Article 109589. DOI: 10.1016/j.engappai.2024.109589. Published 2024-11-06.
Citations: 0
Abstract
Soft Prompt-tuning with Self-Resource Verbalizer for short text streams
Short text streams such as real-time news feeds and search snippets have attracted extensive attention and research in recent decades; their high generation velocity, feature sparsity, and high ambiguity heighten both their importance and the challenges they pose to language models. However, most existing short text stream classification methods can neither automatically select relevant knowledge components for arbitrary samples nor expand knowledge internally, relying instead on external open knowledge bases, to address the inherent limitations of short text streams. In this paper, we propose Soft Prompt-tuning with Self-Resource Verbalizer (SPSV for short) for short text stream classification, in which a soft prompt with self-resource knowledge expansion updates the label-word space to track the evolving semantic topics of the data stream. Specifically, an automatically constructed prompt is first generated to instruct model prediction; it is optimized to cope with the high velocity and topic drift of short text streams. Then, in each chunk, the projection between category names and the label-word space, i.e., the verbalizer, is updated; it is constructed by internal knowledge expansion from the short texts themselves. Through comprehensive experiments on four well-known benchmark datasets, we validate the superior performance of our method over other short text stream classification and fine-tuned PLM methods, achieving more than 90% classification accuracy as the number of data chunks increases.
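The core idea of a self-resource verbalizer, a mapping from category names to a label-word space that is expanded chunk by chunk from the stream's own texts rather than an external knowledge base, can be sketched as a toy. All names below are hypothetical illustrations, not the paper's implementation: the PLM soft prompt is replaced here by simple token overlap so the example stays self-contained.

```python
from collections import Counter, defaultdict

class SelfResourceVerbalizer:
    """Toy verbalizer: maps each category name to a growing set of
    label words, expanded only from the stream's own texts."""

    def __init__(self, categories):
        # Seed each category's label-word set with its own name.
        self.label_words = {c: {c} for c in categories}

    def classify(self, text):
        # Score = overlap between the text's tokens and each
        # category's current label words (stand-in for PLM scoring).
        tokens = set(text.lower().split())
        scores = {c: len(tokens & words) for c, words in self.label_words.items()}
        return max(scores, key=scores.get)

    def update_chunk(self, chunk, top_k=2):
        """Process one data chunk: predict each text, then expand each
        category's label words with the most frequent tokens of the
        texts assigned to it (internal knowledge expansion)."""
        buckets = defaultdict(Counter)
        for text in chunk:
            buckets[self.classify(text)].update(text.lower().split())
        for c, counts in buckets.items():
            for word, _ in counts.most_common(top_k):
                self.label_words[c].add(word)

v = SelfResourceVerbalizer(["sports", "finance"])
v.update_chunk(["sports match tonight",
                "finance stocks rally",
                "finance stocks fall"])
print(v.classify("the match was great"))  # prints "sports"
print(v.classify("stocks fall today"))    # prints "finance"
```

After one chunk, "stocks" has migrated into the finance label-word set, so a later text mentioning only "stocks" is still classified correctly; this per-chunk update is what lets the label-word space follow topic drift in the stream.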
Journal overview:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.