Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, Zhenru Zhang
arXiv - CS - Computation and Language · Published 2024-09-18 · arXiv:2409.12122
Citations: 0

Abstract

In this report, we present a series of math-specific large language models: Qwen2.5-Math and Qwen2.5-Math-Instruct-1.5B/7B/72B. The core innovation of the Qwen2.5 series lies in integrating the philosophy of self-improvement throughout the entire pipeline, from pre-training and post-training to inference: (1) During the pre-training phase, Qwen2-Math-Instruct is utilized to generate large-scale, high-quality mathematical data. (2) In the post-training phase, we develop a reward model (RM) by conducting massive sampling from Qwen2-Math-Instruct. This RM is then applied to the iterative evolution of data in supervised fine-tuning (SFT). With a stronger SFT model, it is possible to iteratively train and update the RM, which in turn guides the next round of SFT data iteration. On the final SFT model, we employ the final RM for reinforcement learning, resulting in Qwen2.5-Math-Instruct. (3) Furthermore, during the inference stage, the RM is used to guide sampling, optimizing the model's performance. Qwen2.5-Math-Instruct supports both Chinese and English and possesses advanced mathematical reasoning capabilities, including Chain-of-Thought (CoT) and Tool-Integrated Reasoning (TIR). We evaluate our models on 10 mathematics datasets in both English and Chinese, such as GSM8K, MATH, GaoKao, AMC23, and AIME24, covering a range of difficulties from grade-school level to math competition problems.
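The RM-guided data evolution in step (2) amounts to rejection-sampling curation: sample several candidate solutions per problem, score them with the reward model, and keep only the top-scored ones as SFT data for the next round. The following is a minimal sketch of that selection step only, not the report's actual implementation; `curate_sft_data` and the toy `generate`/`reward` stand-ins are hypothetical names introduced here for illustration.

```python
from itertools import count


def curate_sft_data(problems, generate, reward, n_samples=8, keep_top=1):
    """One round of RM-guided SFT data evolution: for each problem, sample
    n_samples candidate solutions, score them with the reward model, and
    keep the keep_top highest-scoring (problem, solution) pairs."""
    dataset = []
    for problem in problems:
        candidates = [generate(problem) for _ in range(n_samples)]
        ranked = sorted(candidates, key=lambda c: reward(problem, c), reverse=True)
        dataset.extend((problem, c) for c in ranked[:keep_top])
    return dataset


# Toy stand-ins: a real pipeline would call the LLM and the trained RM.
counter = count()
toy_generate = lambda p: f"{p} -> answer {next(counter) % 10}"  # deterministic dummy sampler
toy_reward = lambda p, c: int(c[-1])  # toy scorer: prefers a larger final digit

data = curate_sft_data(["1+1", "2+2"], toy_generate, toy_reward, n_samples=4)
# data == [("1+1", "1+1 -> answer 3"), ("2+2", "2+2 -> answer 7")]
```

In a real run, `generate` would be temperature-based sampling from the current SFT model and `reward` the trained RM; the curated pairs then feed the next SFT round, after which the RM itself is retrained.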
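RM-guided sampling at inference, as in step (3), is commonly realized as best-of-N selection: draw N candidate solutions and return the one the reward model scores highest. A hedged sketch with a toy scorer standing in for the real RM (the report does not specify this exact procedure; `best_of_n` is a name introduced here):

```python
def best_of_n(problem, generate, reward, n=8):
    """Sample n candidate solutions and return the RM's top pick."""
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda c: reward(problem, c))


# Toy demo: candidates come from a fixed list; the scorer prefers "4".
answers = iter(["3", "5", "4", "2"])
pick = best_of_n(
    "2+2",
    generate=lambda p: next(answers),
    reward=lambda p, c: 1.0 if c == "4" else 0.0,
    n=4,
)
# pick == "4"
```

With a well-calibrated RM, accuracy typically improves monotonically in N at the cost of N forward passes per query.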