Curriculum Learning for Small Code Language Models

Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, Riyadh Baghdadi
DOI: arxiv-2407.10194 (https://doi.org/arxiv-2407.10194)
Journal: arXiv - CS - Programming Languages
Publication date: 2024-07-14
Citations: 0

Abstract

Code language models have emerged as useful tools for various programming tasks, yet they often struggle when it comes to complex ones. In this paper, we explore the potential of curriculum learning in enhancing the performance of these models. While prior research has suggested that curriculum learning does not necessarily help in improving the performance of language models, our results surprisingly show that this may not be the case for code language models. We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models on the task of code execution, while its effect on code completion is less significant. To explore the potential of curriculum learning, we train multiple GPT models with 1 million parameters each to predict the next token and evaluate them on code completion and execution tasks. Our contributions include proposing a novel code difficulty assessment metric by combining software code measures, investigating the effectiveness of curriculum learning for code language models, and introducing a novel curriculum learning schedule that enhances the performance of small decoder-only language models in code execution tasks. The results of this paper open the door for more research on the use of curriculum learning for code language models.
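The core idea of the approach described above — scoring training samples with a composite difficulty metric built from software code measures, then presenting them easy-to-hard — can be sketched as follows. The specific measures and weights here (line count, nesting depth approximated by indentation, operator count) are illustrative assumptions for the sketch, not the metric actually proposed in the paper.

```python
# A minimal sketch of curriculum ordering by a composite code-difficulty
# score. The measures and weights below are illustrative assumptions,
# not the authors' actual metric.

def difficulty(code: str) -> float:
    lines = [l for l in code.splitlines() if l.strip()]
    loc = len(lines)
    # Nesting depth approximated by leading indentation (4-space units).
    depth = max((len(l) - len(l.lstrip())) // 4 for l in lines) if lines else 0
    # Rough operator count as a proxy for expression complexity.
    ops = sum(code.count(op) for op in ("+", "-", "*", "/", "=", "<", ">"))
    # Weighted combination of the measures (weights chosen for illustration).
    return loc + 2.0 * depth + 0.5 * ops

def curriculum_order(samples: list[str]) -> list[str]:
    """Sort training samples from easy to hard for a sequential curriculum."""
    return sorted(samples, key=difficulty)

easy = "x = 1"
hard = "for i in range(3):\n    if i > 1:\n        x = i * 2"
print(curriculum_order([hard, easy])[0] == easy)  # easiest sample comes first
```

A training loop would then feed batches in this order (or in staged buckets of increasing difficulty, depending on the curriculum schedule used).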