{"title":"Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training","authors":"Masanori Hirano, Kentaro Imajo","doi":"arxiv-2404.10555","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) are now widely used in various fields, including\nfinance. However, Japanese financial-specific LLMs have not been proposed yet.\nHence, this study aims to construct a Japanese financial-specific LLM through\ncontinual pre-training. Before tuning, we constructed Japanese\nfinancial-focused datasets for continual pre-training. As a base model, we\nemployed a Japanese LLM that achieved state-of-the-art performance on Japanese\nfinancial benchmarks among the 10-billion-class parameter models. After\ncontinual pre-training using the datasets and the base model, the tuned model\nperformed better than the original model on the Japanese financial benchmarks.\nMoreover, the outputs comparison results reveal that the tuned model's outputs\ntend to be better than the original model's outputs in terms of the quality and\nlength of the answers. These findings indicate that domain-specific continual\npre-training is also effective for LLMs. The tuned model is publicly available\non Hugging Face.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"214 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2404.10555","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large language models (LLMs) are now widely used in various fields, including finance. However, no Japanese financial-specific LLM has been proposed yet. This study therefore aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused datasets for continual pre-training. As the base model, we employed a Japanese LLM that achieved state-of-the-art performance on Japanese financial benchmarks among models in the 10-billion-parameter class. After continual pre-training on these datasets, the tuned model outperformed the original model on the Japanese financial benchmarks. Moreover, a comparison of the outputs reveals that the tuned model's answers tend to be better than the original model's in both quality and length. These findings indicate that domain-specific continual pre-training is also effective for LLMs. The tuned model is publicly available on Hugging Face.
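
The abstract does not specify the training setup, so the following is only a minimal sketch of what continual pre-training of a causal LM on a domain corpus typically looks like with Hugging Face Transformers and Datasets. The model identifier, corpus file name, and hyperparameters are placeholders for illustration, not the ones used in the paper.

```python
# Minimal continual pre-training sketch (assumed setup, not the paper's actual configuration).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "your-org/japanese-10b-base"  # placeholder for the Japanese base LLM

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Some causal-LM tokenizers have no pad token; reuse EOS so dynamic padding works.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder domain corpus: one plain-text file of Japanese financial documents.
raw = load_dataset("text", data_files={"train": "finance_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives the standard causal-LM objective (labels = shifted input ids).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ckpt-finance-cpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,      # small LR is common in continual pre-training to limit forgetting
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("ckpt-finance-cpt/final")
```

In this sketch the base model's weights are simply further optimized on the in-domain corpus with the same next-token objective used in pre-training, which is the general idea behind the domain-specific continual pre-training the abstract describes; the paper's own data mixture, schedule, and evaluation on Japanese financial benchmarks are described in the full text.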