CEBench: A Benchmarking Toolkit for the Cost-Effectiveness of LLM Pipelines

Wenbo Sun, Jiaqi Wang, Qiming Guo, Ziyu Li, Wenlu Wang, Rihan Hai
arXiv:2407.12797 · arXiv - CS - Performance · Published 2024-06-20 · Citations: 0

Abstract

Online Large Language Model (LLM) services such as ChatGPT and Claude 3 have transformed business operations and academic research by effortlessly enabling new opportunities. However, due to data-sharing restrictions, sectors such as healthcare and finance prefer to deploy local LLM applications using costly hardware resources. This scenario requires a balance between the effectiveness advantages of LLMs and significant financial burdens. Additionally, the rapid evolution of models increases the frequency and redundancy of benchmarking efforts. Existing benchmarking toolkits, which typically focus on effectiveness, often overlook economic considerations, making their findings less applicable to practical scenarios. To address these challenges, we introduce CEBench, an open-source toolkit specifically designed for multi-objective benchmarking that focuses on the critical trade-offs between expenditure and effectiveness required for LLM deployments. CEBench allows for easy modifications through configuration files, enabling stakeholders to effectively assess and optimize these trade-offs. This strategic capability supports crucial decision-making processes aimed at maximizing effectiveness while minimizing cost impacts. By streamlining the evaluation process and emphasizing cost-effectiveness, CEBench seeks to facilitate the development of economically viable AI solutions across various industries and research fields. The code and a demonstration are available at https://github.com/amademicnoboday12/CEBench.
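The multi-objective trade-off the abstract describes can be made concrete with a Pareto-front comparison: a deployment is worth considering only if no alternative is both more effective and cheaper. The sketch below is purely illustrative and is not CEBench's actual API; the deployment names, scores, and costs are hypothetical examples.

```python
# Illustrative sketch of the cost-vs-effectiveness trade-off described above.
# All deployment names, scores, and costs are hypothetical, not CEBench data.

def pareto_front(candidates):
    """Return candidates not dominated on (effectiveness up, cost down).

    Each candidate is a (name, effectiveness_score, cost) tuple. A candidate
    is dominated if some other candidate is at least as good on both axes
    and strictly better on at least one.
    """
    front = []
    for name, score, cost in candidates:
        dominated = any(
            s >= score and c <= cost and (s > score or c < cost)
            for _, s, c in candidates
        )
        if not dominated:
            front.append((name, score, cost))
    return front

deployments = [
    ("hosted-large", 0.91, 8.0),  # high accuracy, high cost per 1k requests
    ("local-13b",    0.84, 3.0),  # mid accuracy, mid hardware cost
    ("local-7b",     0.78, 1.5),  # lower accuracy, cheapest
    ("local-7b-q4",  0.76, 1.6),  # dominated: worse AND costlier than local-7b
]

for name, score, cost in pareto_front(deployments):
    print(f"{name}: score={score}, cost={cost}")
```

A stakeholder would then pick from the surviving front according to budget; dominated configurations (here, `local-7b-q4`) can be discarded without further benchmarking.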