Training and Serving System of Foundation Models: A Comprehensive Survey

Jiahang Zhou;Yanyu Chen;Zicong Hong;Wuhui Chen;Yue Yu;Tao Zhang;Hui Wang;Chuanfu Zhang;Zibin Zheng
{"title":"Training and Serving System of Foundation Models: A Comprehensive Survey","authors":"Jiahang Zhou;Yanyu Chen;Zicong Hong;Wuhui Chen;Yue Yu;Tao Zhang;Hui Wang;Chuanfu Zhang;Zibin Zheng","doi":"10.1109/OJCS.2024.3380828","DOIUrl":null,"url":null,"abstract":"Foundation models (e.g., ChatGPT, DALL-E, PengCheng Mind, PanGu-\n<inline-formula><tex-math>$\\Sigma$</tex-math></inline-formula>\n) have demonstrated extraordinary performance in key technological areas, such as natural language processing and visual recognition, and have become the mainstream trend of artificial general intelligence. This has led more and more major technology giants to dedicate significant human and financial resources to actively develop their foundation model systems, which drives continuous growth of these models' parameters. As a result, the training and serving of these models have posed significant challenges, including substantial computing power, memory consumption, bandwidth demands, etc. Therefore, employing efficient training and serving strategies becomes particularly crucial. Many researchers have actively explored and proposed effective methods. So, a comprehensive survey of them is essential for system developers and researchers. This paper extensively explores the methods employed in training and serving foundation models from various perspectives. It provides a detailed categorization of these state-of-the-art methods, including finer aspects such as network, computing, and storage. Additionally, the paper summarizes the challenges and presents a perspective on the future development direction of foundation model systems. 
Through comprehensive discussion and analysis, it hopes to provide a solid theoretical basis and practical guidance for future research and applications, promoting continuous innovation and development in foundation model systems.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"107-119"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10478189","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10478189/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Foundation models (e.g., ChatGPT, DALL-E, PengCheng Mind, PanGu-$\Sigma$) have demonstrated extraordinary performance in key technological areas such as natural language processing and visual recognition, and have become the mainstream trend of artificial general intelligence. This has led more and more major technology companies to dedicate significant human and financial resources to developing their own foundation model systems, driving continuous growth in these models' parameter counts. As a result, training and serving these models poses significant challenges, including substantial demands on computing power, memory, and bandwidth. Employing efficient training and serving strategies therefore becomes particularly crucial. Many researchers have actively explored and proposed effective methods, so a comprehensive survey of them is essential for system developers and researchers. This paper extensively explores the methods employed in training and serving foundation models from various perspectives, providing a detailed categorization of these state-of-the-art methods along finer dimensions such as network, computing, and storage. Additionally, it summarizes the open challenges and presents a perspective on the future development of foundation model systems. Through comprehensive discussion and analysis, it aims to provide a solid theoretical basis and practical guidance for future research and applications, promoting continuous innovation and development in foundation model systems.
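The memory pressure that motivates these training-system techniques can be illustrated with a back-of-envelope estimate (not taken from the paper; the 16-bytes-per-parameter breakdown for mixed-precision Adam training is the accounting commonly used in the systems literature, and the `training_memory_gb` helper below is purely illustrative):

```python
def training_memory_gb(num_params: float) -> float:
    """Approximate memory (GB) for model states when training with Adam
    in mixed precision: 2 B fp16 weights + 2 B fp16 gradients
    + 4 B fp32 master weights + 4 B fp32 momentum + 4 B fp32 variance
    = 16 bytes per parameter. Activations and buffers are excluded.
    """
    return num_params * 16 / 1024**3

# A 175-billion-parameter model (GPT-3 scale) needs on the order of:
print(f"{training_memory_gb(175e9):.0f} GB")  # ~2608 GB of model states alone
```

Since no single accelerator offers terabytes of device memory, such an estimate makes concrete why the parallelization, offloading, and memory-optimization strategies surveyed here are necessary.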