Experience Paper: Towards enhancing cost efficiency in serverless machine learning training

Marc Sánchez Artigas, Pablo Gimeno Sarroca
{"title":"Experience Paper: Towards enhancing cost efficiency in serverless machine learning training","authors":"Marc Sánchez Artigas, Pablo Gimeno Sarroca","doi":"10.1145/3464298.3494884","DOIUrl":null,"url":null,"abstract":"Function-as-a-Service (FaaS) has raised a growing interest in how to \"tame\" serverless to enable domain-specific use cases such as data-intensive applications and machine learning (ML), to name a few. Recently, several systems have been implemented for training ML models. Certainly, these research articles are significant steps in the correct direction. However, they do not completely answer the nagging question of when serverless ML training can be more cost-effective compared to traditional \"serverful\" computing. To help in this task, we propose MLLess, a FaaS-based ML training prototype built atop IBM Cloud Functions. To boost cost-efficiency, MLLess implements two key optimizations: a significance filter and a scale-in auto-tuner, and leverages them to specialize model training to the FaaS model. Our results certify that MLLess can be 15X faster than serverful ML systems [24] at a lower cost for ML models (such as sparse logistic regression and matrix factorization) that exhibit fast convergence.","PeriodicalId":154994,"journal":{"name":"Proceedings of the 22nd International Middleware Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 22nd International Middleware Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3464298.3494884","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

Function-as-a-Service (FaaS) has sparked growing interest in how to "tame" serverless computing to enable domain-specific use cases such as data-intensive applications and machine learning (ML). Recently, several systems have been built for training ML models on FaaS. These research efforts are significant steps in the right direction, but they do not fully answer the nagging question of when serverless ML training is more cost-effective than traditional "serverful" computing. To help answer it, we propose MLLess, a FaaS-based ML training prototype built atop IBM Cloud Functions. To boost cost efficiency, MLLess implements two key optimizations, a significance filter and a scale-in auto-tuner, and leverages them to specialize model training to the FaaS model. Our results show that MLLess can be 15X faster than serverful ML systems [24] at a lower cost for ML models (such as sparse logistic regression and matrix factorization) that exhibit fast convergence.
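The abstract does not detail how the two optimizations work. As a minimal sketch of the significance-filter idea, assuming the filter suppresses model updates whose relative magnitude falls below a threshold, the worker-side logic could look as follows (all names, the threshold, and the update rule are illustrative assumptions, not MLLess's actual API):

    import numpy as np

    def is_significant(pending, model, threshold=0.01):
        # Hypothetical criterion: an accumulated update is worth broadcasting
        # only if its magnitude relative to the current model exceeds `threshold`.
        rel = np.linalg.norm(pending) / (np.linalg.norm(model) + 1e-12)
        return rel >= threshold

    def local_step(model, gradient, pending, publish, lr=0.05, threshold=0.01):
        # One step of a serverless worker: apply the gradient locally,
        # accumulate the change, and only communicate it once it is significant.
        delta = -lr * gradient
        model = model + delta
        pending = pending + delta            # updates not yet shared with other workers
        if is_significant(pending, model, threshold):
            publish(pending)                 # e.g., write to shared storage or a message broker
            pending = np.zeros_like(pending)
        return model, pending

Filtering out insignificant updates matters in a FaaS setting because functions cannot talk to each other directly, so every exchanged update traverses external storage or messaging and contributes to both latency and billed cost; the scale-in auto-tuner presumably complements this by shrinking the number of running functions when additional workers no longer speed up convergence enough to justify their cost.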