Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji
{"title":"优化选择附加语言混合比例的 Llama-3 70B 后期培训实践","authors":"Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji","doi":"arxiv-2409.06624","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLM) often needs to be Continual Pre-Trained (CPT) to\nobtain the unfamiliar language skill or adapt into new domains. The huge\ntraining cost of CPT often asks for cautious choice of key hyper-parameters\nsuch as the mixture ratio of extra language or domain corpus. However, there is\nno systematic study which bridge the gap between the optimal mixture ratio and\nthe actual model performance, and the gap between experimental scaling law and\nthe actual deployment in the full model size. In this paper, we perform CPT on\nLlama-3 8B and 70B to enhance its Chinese ability. We study the optimal\ncorrelation between the Additional Language Mixture Ratio (ALMR) and the\nLearning Rate (LR) on the 8B size which directly indicate the optimal\nexperimental set up. By thorough choice of hyper-parameter, and subsequent\nfine-tuning, the model capability is improved not only on the Chinese-related\nbenchmark, but also some specific domains including math, coding and emotional\nintelligence. We deploy the final 70B version of LLM on an real-life chat\nsystem which obtain satisfying performance.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"67 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Practice of Post-Training on Llama-3 70B with Optimal Selection of Additional Language Mixture Ratio\",\"authors\":\"Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji\",\"doi\":\"arxiv-2409.06624\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLM) often needs to be Continual Pre-Trained (CPT) to\\nobtain the unfamiliar language skill or adapt into new domains. The huge\\ntraining cost of CPT often asks for cautious choice of key hyper-parameters\\nsuch as the mixture ratio of extra language or domain corpus. However, there is\\nno systematic study which bridge the gap between the optimal mixture ratio and\\nthe actual model performance, and the gap between experimental scaling law and\\nthe actual deployment in the full model size. In this paper, we perform CPT on\\nLlama-3 8B and 70B to enhance its Chinese ability. We study the optimal\\ncorrelation between the Additional Language Mixture Ratio (ALMR) and the\\nLearning Rate (LR) on the 8B size which directly indicate the optimal\\nexperimental set up. By thorough choice of hyper-parameter, and subsequent\\nfine-tuning, the model capability is improved not only on the Chinese-related\\nbenchmark, but also some specific domains including math, coding and emotional\\nintelligence. 
We deploy the final 70B version of LLM on an real-life chat\\nsystem which obtain satisfying performance.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"67 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06624\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Practice of Post-Training on Llama-3 70B with Optimal Selection of Additional Language Mixture Ratio
Large Language Models (LLMs) often need to be Continually Pre-Trained (CPT) to acquire unfamiliar language skills or to adapt to new domains. The huge training cost of CPT calls for a careful choice of key hyper-parameters, such as the mixture ratio of the additional language or domain corpus. However, there is no systematic study that bridges the gap between the optimal mixture ratio and the actual model performance, or between the experimental scaling law and the actual deployment at full model size. In this paper, we perform CPT on Llama-3 8B and 70B to enhance their Chinese ability. We study the optimal correlation between the Additional Language Mixture Ratio (ALMR) and the Learning Rate (LR) at the 8B size, which directly indicates the optimal experimental setup. Through a thorough choice of hyper-parameters and subsequent fine-tuning, the model's capability is improved not only on Chinese-related benchmarks but also in specific domains, including math, coding, and emotional intelligence. We deploy the final 70B version of the LLM in a real-life chat system, where it obtains satisfying performance.
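As a rough illustration of what the Additional Language Mixture Ratio controls, the sketch below interleaves an additional-language (e.g. Chinese) corpus with the original corpus at a fixed ALMR when assembling CPT training data. The function name, sampling scheme, and toy data are hypothetical; the abstract does not describe the authors' actual data pipeline.

```python
import random

def mix_corpora(base_docs, extra_docs, almr, seed=0):
    """Sample CPT documents so that roughly an `almr` fraction comes from the
    additional-language corpus. Hypothetical sketch, not the paper's pipeline."""
    rng = random.Random(seed)
    mixed = []
    for _ in range(len(base_docs)):
        if extra_docs and rng.random() < almr:
            mixed.append(rng.choice(extra_docs))  # additional-language document
        else:
            mixed.append(rng.choice(base_docs))   # original-language document
    return mixed

# Example: a 30% additional-language mixture ratio over toy corpora
batch = mix_corpora(["en doc"] * 1000, ["zh doc"] * 1000, almr=0.3)
print(sum(d == "zh doc" for d in batch) / len(batch))  # ~0.3
```

In this framing, ALMR is simply the expected fraction of additional-language tokens seen during continual pre-training, and the paper's contribution is characterizing how that fraction should co-vary with the learning rate at a given model size.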