Insufficient task description can impair in-context learning: A study from information perspective

IF 14.7 · CAS Zone 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Meidai Xuanyuan, Tao Yang, Jingwen Fu, Sicheng Zhao, Yuwang Wang
{"title":"Insufficient task description can impair in-context learning: A study from information perspective","authors":"Meidai Xuanyuan ,&nbsp;Tao Yang ,&nbsp;Jingwen Fu ,&nbsp;Sicheng Zhao ,&nbsp;Yuwang Wang","doi":"10.1016/j.inffus.2025.103116","DOIUrl":null,"url":null,"abstract":"<div><div>In-context learning, an essential technique in transformer-based models, relies on two main sources of information: in-context examples and task descriptions. While extensive research has focused on the influence of in-context examples, the role of task descriptions remains underexplored, despite its practical significance. This paper investigates how task descriptions impact the in-context learning performance of transformers and how these two sources of information can be effectively fused. We design a synthetic experimental framework to control the information provided in task descriptions and conduct a series of experiments where task description details are systematically varied. Our findings reveal the dual roles of task descriptions: an insufficient task description will cause the model to overlook in-context examples, leading to poor in-context performance; once the amount of information in the task description exceeds a certain threshold, the impact of the task description shifts from negative to positive, and a performance emergence can be observed. We replicate these findings on GPT-4, observing a similar double-sided effect. This study highlights the critical role of task descriptions in in-context learning, offering valuable insights for future applications of transformer models.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103116"},"PeriodicalIF":14.7000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525001897","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In-context learning, an essential technique in transformer-based models, relies on two main sources of information: in-context examples and task descriptions. While extensive research has focused on the influence of in-context examples, the role of task descriptions remains underexplored, despite its practical significance. This paper investigates how task descriptions impact the in-context learning performance of transformers and how these two sources of information can be effectively fused. We design a synthetic experimental framework to control the information provided in task descriptions and conduct a series of experiments where task description details are systematically varied. Our findings reveal the dual roles of task descriptions: an insufficient task description will cause the model to overlook in-context examples, leading to poor in-context performance; once the amount of information in the task description exceeds a certain threshold, the impact of the task description shifts from negative to positive, and a performance emergence can be observed. We replicate these findings on GPT-4, observing a similar double-sided effect. This study highlights the critical role of task descriptions in in-context learning, offering valuable insights for future applications of transformer models.
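To make the setup concrete, below is a minimal sketch, in Python, of how one might vary the informativeness of a task description while holding the in-context examples fixed. The synthetic task (mod-7 addition), the four description levels, and all function names are illustrative assumptions, not the paper's actual experimental framework; a real run would send each prompt to a model such as GPT-4 and track accuracy as the description level rises, looking for the threshold where the description's effect flips from negative to positive.

```python
# Hypothetical sketch of the kind of experiment the abstract describes:
# fuse two information sources (task description + in-context examples)
# and vary only the description's informativeness. The task, levels, and
# names here are illustrative assumptions, not the paper's framework.
import random

def make_examples(k, seed=0):
    """Generate k input-output pairs for a synthetic task: y = (a + b) mod 7."""
    rng = random.Random(seed)
    pairs = [(rng.randint(0, 20), rng.randint(0, 20)) for _ in range(k)]
    return [((a, b), (a + b) % 7) for a, b in pairs]

# Task descriptions with increasing information content, from none to full.
DESCRIPTIONS = {
    0: "",                                             # no description
    1: "Map each input pair to a single digit.",       # vague
    2: "Add the two numbers in each input pair.",      # partial rule
    3: "Output (a + b) mod 7 for each input (a, b).",  # full rule
}

def build_prompt(level, examples, query):
    """Combine a task description (one of the levels above) with fixed examples."""
    lines = [DESCRIPTIONS[level]] if DESCRIPTIONS[level] else []
    lines += [f"Input: {x} -> Output: {y}" for x, y in examples]
    lines.append(f"Input: {query} -> Output:")
    return "\n".join(lines)

if __name__ == "__main__":
    examples = make_examples(k=8)
    for level in sorted(DESCRIPTIONS):
        prompt = build_prompt(level, examples, query=(13, 9))
        print(f"--- description level {level} ---\n{prompt}\n")
    # A real experiment would send each prompt to a model (e.g., GPT-4),
    # score the answers, and plot accuracy against description level to
    # locate the threshold where more description starts to help.
```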
Source Journal

Information Fusion (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Annual article count: 161
Average review time: 7.9 months
Journal Description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.