MoA is All You Need: Building LLM Research Team using Mixture of Agents

Sandy Chen, Leqi Zeng, Abhinav Raghunathan, Flora Huang, Terrence C. Kim
{"title":"MoA is All You Need: Building LLM Research Team using Mixture of Agents","authors":"Sandy Chen, Leqi Zeng, Abhinav Raghunathan, Flora Huang, Terrence C. Kim","doi":"arxiv-2409.07487","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) research in the financial domain is particularly\ncomplex due to the sheer number of approaches proposed in literature.\nRetrieval-Augmented Generation (RAG) has emerged as one of the leading methods\nin the sector due to its inherent groundedness and data source variability. In\nthis work, we introduce a RAG framework called Mixture of Agents (MoA) and\ndemonstrate its viability as a practical, customizable, and highly effective\napproach for scaling RAG applications. MoA is essentially a layered network of\nindividually customized small language models (Hoffmann et al., 2022)\ncollaborating to answer questions and extract information. While there are many\ntheoretical propositions for such an architecture and even a few libraries for\ngenerally applying the structure in practice, there are limited documented\nstudies evaluating the potential of this framework considering real business\nconstraints such as cost and speed. We find that the MoA framework, consisting\nof small language models (Hoffmann et al., 2022), produces higher quality and\nmore grounded responses across various financial domains that are core to\nVanguard's business while simultaneously maintaining low costs.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07487","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Large Language Model (LLM) research in the financial domain is particularly complex due to the sheer number of approaches proposed in the literature. Retrieval-Augmented Generation (RAG) has emerged as one of the leading methods in the sector due to its inherent groundedness and data source variability. In this work, we introduce a RAG framework called Mixture of Agents (MoA) and demonstrate its viability as a practical, customizable, and highly effective approach for scaling RAG applications. MoA is essentially a layered network of individually customized small language models (Hoffmann et al., 2022) collaborating to answer questions and extract information. While there are many theoretical propositions for such an architecture and even a few libraries for generally applying the structure in practice, there are limited documented studies evaluating the potential of this framework under real business constraints such as cost and speed. We find that the MoA framework, consisting of small language models (Hoffmann et al., 2022), produces higher quality and more grounded responses across various financial domains that are core to Vanguard's business while simultaneously maintaining low costs.
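As a rough illustration of the layered structure the abstract describes, the sketch below wires individually configured agents into layers whose answers are fed forward as extra context for the next layer, with a final aggregator agent synthesizing the response. This is only a minimal sketch under assumed interfaces: the paper does not publish code, and every name here (`Agent`, `mixture_of_agents`, the stub echo agents) is hypothetical; real agents would wrap small, domain-tuned language models with their own prompts and retrieval sources.

```python
# Hypothetical sketch of a layered Mixture-of-Agents (MoA) RAG pipeline.
# Not the authors' implementation; it only mirrors the high-level idea of
# small, individually customized agents arranged in layers.

from dataclasses import dataclass
from typing import Callable, List

# An "agent" is abstracted as a callable mapping (question, context) -> answer,
# so the sketch runs without any LLM backend.
AgentFn = Callable[[str, List[str]], str]


@dataclass
class Agent:
    name: str
    generate: AgentFn  # e.g. a call to a small, domain-tuned model


def run_layer(agents: List[Agent], question: str, context: List[str]) -> List[str]:
    """Run every agent in one layer on the same question and shared context."""
    return [agent.generate(question, context) for agent in agents]


def mixture_of_agents(
    layers: List[List[Agent]],
    aggregator: Agent,
    question: str,
    retrieved_docs: List[str],
) -> str:
    """Layered MoA: each layer sees the retrieved documents plus the previous
    layer's answers; a final aggregator synthesizes the result."""
    context = list(retrieved_docs)
    for layer in layers:
        answers = run_layer(layer, question, context)
        # Feed this layer's answers forward as additional context.
        context = retrieved_docs + answers
    return aggregator.generate(question, context)


if __name__ == "__main__":
    # Stub "models" so the example is self-contained and executable.
    def echo_agent(tag: str) -> AgentFn:
        return lambda q, ctx: f"[{tag}] answer to '{q}' using {len(ctx)} context items"

    layers = [
        [Agent("summarizer", echo_agent("L1-A")), Agent("fact-checker", echo_agent("L1-B"))],
        [Agent("domain-analyst", echo_agent("L2-A"))],
    ]
    aggregator = Agent("aggregator", echo_agent("AGG"))
    print(mixture_of_agents(layers, aggregator, "What drives fund expense ratios?", ["doc1", "doc2"]))
```

In such a setup, per-layer customization (prompts, retrieval sources, model choice) is where the cost and quality trade-offs described in the abstract would be tuned; the stub agents above would be replaced by calls to the chosen small language models.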