OSATG-GPT: Instruction-tuning large language models with open-source atomic tasks in GitHub

IF 7.5 | CAS Tier 1 (Computer Science) | JCR Q1, Computer Science, Artificial Intelligence
Fanyu Han, Li Ma, Fenglin Bi, Yantong Wang, Mingdong You, Wei Wang, Jiaheng Peng, Xiaoya Xia
{"title":"OSATG-GPT: Instruction-tuning large language models with open-source atomic tasks in Github","authors":"Fanyu Han ,&nbsp;Li Ma ,&nbsp;Fenglin Bi ,&nbsp;Yantong Wang ,&nbsp;Mingdong You ,&nbsp;Wei Wang ,&nbsp;Jiaheng Peng ,&nbsp;Xiaoya Xia","doi":"10.1016/j.eswa.2025.129819","DOIUrl":null,"url":null,"abstract":"<div><div>Across numerous application scenarios in Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated exceptional capabilities in text comprehension and generation. These models exhibit significant potential across various interdisciplinary fields. However, their effectiveness is somewhat constrained by the unique characteristics of the open-source ecosystem. Developing an LLM with generalization capabilities across datasets and tasks, specifically tailored for the open-source ecosystem, is an urgent research need. To address this challenge, this paper introduces open-source atomic tasks, which are defined as intermediate tasks essential for solving complex objectives. These tasks are designed through strategies such as simplification, reversal, decomposition, and composition, enabling models to gradually acquire domain knowledge and understand task interdependencies. By integrating public resources with open-source atomic tasks, we construct OSE-Instruct–an instruction dataset for the open-source ecosystem. We first unify open-source atomic tasks within an instruction-tuning paradigm that reflects real-world developer behavior, and develop OSATG-GPT at various parameter scales by fine-tuning the BLOOMZ backbone model on OSE-Instruct. This enables the model to learn fine-grained developer actions and the underlying task dependencies. Extensive experiments validate the effectiveness of OSATG-GPT compared to other advanced LLMs with larger parameter scales, and highlight its advantages over GPT-4 in specific and complex open-source collaboration tasks.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"298 ","pages":"Article 129819"},"PeriodicalIF":7.5000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425034347","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Across numerous application scenarios in Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated exceptional capabilities in text comprehension and generation, and they exhibit significant potential across various interdisciplinary fields. However, their effectiveness is constrained by the unique characteristics of the open-source ecosystem. Developing an LLM with generalization capabilities across datasets and tasks, tailored specifically to the open-source ecosystem, is an urgent research need. To address this challenge, this paper introduces open-source atomic tasks, defined as intermediate tasks essential for solving complex objectives. These tasks are designed through strategies such as simplification, reversal, decomposition, and composition, enabling models to gradually acquire domain knowledge and understand task interdependencies. By integrating public resources with open-source atomic tasks, we construct OSE-Instruct, an instruction dataset for the open-source ecosystem. We first unify open-source atomic tasks within an instruction-tuning paradigm that reflects real-world developer behavior, and then develop OSATG-GPT at various parameter scales by fine-tuning the BLOOMZ backbone model on OSE-Instruct. This enables the model to learn fine-grained developer actions and the underlying task dependencies. Extensive experiments validate the effectiveness of OSATG-GPT against other advanced LLMs with larger parameter scales and highlight its advantages over GPT-4 on specific, complex open-source collaboration tasks.
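The abstract names four design strategies for atomic tasks: simplification, reversal, decomposition, and composition. Purely as an illustration of what such derivations could look like, the Python sketch below fabricates atomic instructions from one invented seed task; the seed task, field names, and derived wording are assumptions, not the paper's actual OSE-Instruct schema.

```python
# Hypothetical illustration of the four atomic-task design strategies the
# abstract names. The seed task and all derived wordings are invented for
# this sketch; the paper's real OSE-Instruct records may differ entirely.

seed = {
    "instruction": "Given a GitHub issue, assign it to the most suitable maintainer.",
    "context": "Issue: 'Parser crashes on empty YAML files.' Maintainers: alice (parser), bob (docs).",
}

def simplify(task):
    # Simplification: strip the objective down to a smaller binary decision.
    return {"instruction": "Does this issue concern the parser? Answer yes or no.",
            "context": task["context"]}

def reverse(task):
    # Reversal: swap the input and output roles of the original task.
    return {"instruction": "alice was assigned an issue. Which listed issue is most likely hers?",
            "context": task["context"]}

def decompose(task):
    # Decomposition: intermediate steps whose answers feed the original task.
    return [
        {"instruction": "Identify the component this issue concerns.", "context": task["context"]},
        {"instruction": "Name the maintainer who owns that component.", "context": task["context"]},
    ]

def compose(subtasks):
    # Composition: chain atomic steps back into the complex objective.
    steps = ", then ".join(t["instruction"].rstrip(".") for t in subtasks)
    return {"instruction": f"First {steps}, and finally assign the issue.",
            "context": subtasks[0]["context"]}

atomic_tasks = [simplify(seed), reverse(seed), *decompose(seed), compose(decompose(seed))]
for task in atomic_tasks:
    print(task["instruction"])
```

Each derived record keeps the same context but a narrower (or recombined) instruction, which is the sense in which atomic tasks act as intermediate steps toward the complex objective.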
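The paper fine-tunes BLOOMZ backbones on OSE-Instruct. As a minimal sketch only, the snippet below runs one supervised step on a single hypothetical record using the public bigscience/bloomz-560m checkpoint via Hugging Face Transformers; the prompt format, loss masking, and learning rate are assumptions, not the paper's actual training recipe.

```python
# Minimal sketch: one supervised fine-tuning step of a BLOOMZ checkpoint on a
# single hypothetical OSE-Instruct-style record. The prompt format, masking
# scheme, and hyperparameters are assumptions, not the paper's recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model.train()

# Hypothetical atomic-task record (a simplified issue-triage step).
prompt = ("Classify the GitHub issue below as bug, feature, or question.\n"
          "Issue: App crashes on startup after upgrading to v2.3.\nAnswer: ")
target = "bug" + tokenizer.eos_token

# Tokenize prompt and target separately so the prompt/answer boundary is exact.
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
target_ids = tokenizer(target, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, target_ids], dim=1)

# Compute the language-modeling loss only on the answer tokens.
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"single-step loss: {loss.item():.4f}")
```

A real run would batch over the full OSE-Instruct dataset and repeat this step across epochs; masking the prompt tokens with -100 is a common choice so the model is penalized only for the answer it generates, not for reproducing the instruction.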
Source journal
Expert Systems with Applications (Engineering Technology; Engineering: Electrical & Electronic)
CiteScore: 13.80
Self-citation rate: 10.60%
Annual articles: 2045
Review time: 8.7 months
Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.