Enhancing zero-shot stance detection via multi-task fine-tuning with debate data and knowledge augmentation

Impact Factor: 5.0 | CAS Region 2 (Computer Science) | JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Qinlong Fan, Jicang Lu, Yepeng Sun, Qiankun Pi, Shouxin Shang
DOI: 10.1007/s40747-024-01767-8
Journal: Complex & Intelligent Systems
Publication date: 2025-01-15 (Journal Article)
Citations: 0

Abstract


In the real world, stance detection tasks often involve assessing the stance or attitude of a given text toward new, unseen targets, a task known as zero-shot stance detection. However, zero-shot stance detection often suffers from issues such as sparse data annotation and inherent task complexity, which can lead to lower performance. To address these challenges, we propose combining fine-tuning of Large Language Models (LLMs) with knowledge augmentation for zero-shot stance detection. Specifically, we leverage stance detection and related tasks from debate corpora to perform multi-task fine-tuning of LLMs. This approach aims to learn and transfer the capability of zero-shot stance detection and reasoning analysis from relevant data. Additionally, we enhance the model’s semantic understanding of the given text and targets by retrieving relevant knowledge from external knowledge bases as context, alleviating the lack of relevant contextual knowledge. Compared to ChatGPT, our model achieves a significant improvement in the average F1 score, with an increase of 15.74% on the SemEval 2016 Task 6 A and 3.55% on the P-Stance dataset. Our model outperforms current state-of-the-art models on these two datasets, demonstrating the superiority of multi-task fine-tuning with debate data and knowledge augmentation.
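The abstract describes two concrete components: augmenting the input with knowledge retrieved from external knowledge bases, and reporting performance as an average F1 score. As a minimal, hypothetical sketch — the function names, prompt wording, and label set below are illustrative assumptions, not the paper's actual implementation — the knowledge-augmented input and a macro-averaged F1 metric might look like:

```python
# Hypothetical sketch: knowledge-augmented prompt construction and macro-F1
# evaluation for zero-shot stance detection. The paper's actual pipeline
# multi-task fine-tunes an LLM on debate-corpus data; this only illustrates
# the input format and metric, with all names assumed.

def build_stance_prompt(text: str, target: str, knowledge: list[str]) -> str:
    """Prepend retrieved background knowledge as context, then ask for a stance."""
    context = "\n".join(f"- {k}" for k in knowledge)
    return (
        f"Background knowledge:\n{context}\n\n"
        f"Text: {text}\n"
        f"Target: {target}\n"
        "Stance (Favor / Against / None):"
    )

def macro_f1(gold: list[str], pred: list[str]) -> float:
    """Macro-averaged F1 over the label set (per-label F1, then unweighted mean)."""
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for label in labels:
        tp = sum(g == p == label for g, p in zip(gold, pred))
        fp = sum(p == label and g != label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Note that SemEval-2016 Task 6 conventionally averages F1 over only the Favor and Against classes; the sketch above averages over all observed labels for simplicity.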

Source journal: Complex & Intelligent Systems
CiteScore: 9.60
Self-citation rate: 10.30%
Annual articles: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.