TeenyTinyLlama: Open-source tiny language models trained in Brazilian Portuguese

Nicholas Kluge Corrêa, Sophia Falk, Shiza Fatimah, Aniket Sen, Nythamar De Oliveira

Machine Learning with Applications, Volume 16 (2024), Article 100558. Published 10 May 2024. DOI: 10.1016/j.mlwa.2024.100558. Available at: https://www.sciencedirect.com/science/article/pii/S2666827024000343
Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundations sometimes restrict the byproducts they produce, such as their computational demands and licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, along with their limitations and benefits. The result is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development.
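As a minimal sketch of how the released checkpoints might be used, the snippet below loads one of the models with the Hugging Face transformers library and generates Brazilian Portuguese text. The repository identifier and the generation settings are assumptions for illustration and are not specified in the abstract; consult the project's GitHub and Hugging Face pages for the released names and recommended settings.

# Minimal sketch: loading a TeenyTinyLlama checkpoint and generating text.
# The model identifier and decoding parameters below are assumptions for
# illustration, not taken from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-160m"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A capital do Brasil é"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation from the model.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))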