Fault Tolerant Support Vector Machines

Sameh M. Shohdy, Abhinav Vishnu, G. Agrawal
{"title":"容错支持向量机","authors":"Sameh M. Shohdy, Abhinav Vishnu, G. Agrawal","doi":"10.1109/ICPP.2016.75","DOIUrl":null,"url":null,"abstract":"Support Vector Machines (SVM) is a popular Machine Learning algorithm, which is used for building classifiers and models. Parallel implementations of SVM, which can run on large scale supercomputers, are becoming commonplace. However, these supercomputers -- designed under constraints of data movement -- frequently observe faults in compute devices. Many device faults manifest as permanent process/node failures. In this paper, we present several approaches for designing fault tolerant SVM algorithms. First, we present an in-depth analysis to identify the critical data structures, and build baseline algorithms that simply periodically checkpoint these data structures. Next, we propose a novel algorithm, which requires no inter-node data movement for checkpointing, and only O(n2/p2) recovery time -- a small fraction of the expected O(n3/p) time-complexity of SVM. We implement these algorithms and evaluate them on a large scale cluster. Our evaluation indicates that the overall data movement for checkpointing in the baseline algorithm can be up to 100x the dataset size!, while the proposed novel algorithm is completely communication-free of checkpointing. In addition, it saves up to 20x space, while providing better (by an average of 5.5x speedup on 256 cores) recovery time than the baseline algorithm with different number of checkpoints. The experiments also show that our communication avoiding algorithm outperforms Spark MLLib SVM implementation by an average of 6.4x with 256 cores in the case of failure.","PeriodicalId":409991,"journal":{"name":"2016 45th International Conference on Parallel Processing (ICPP)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Fault Tolerant Support Vector Machines\",\"authors\":\"Sameh M. Shohdy, Abhinav Vishnu, G. Agrawal\",\"doi\":\"10.1109/ICPP.2016.75\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Support Vector Machines (SVM) is a popular Machine Learning algorithm, which is used for building classifiers and models. Parallel implementations of SVM, which can run on large scale supercomputers, are becoming commonplace. However, these supercomputers -- designed under constraints of data movement -- frequently observe faults in compute devices. Many device faults manifest as permanent process/node failures. In this paper, we present several approaches for designing fault tolerant SVM algorithms. First, we present an in-depth analysis to identify the critical data structures, and build baseline algorithms that simply periodically checkpoint these data structures. Next, we propose a novel algorithm, which requires no inter-node data movement for checkpointing, and only O(n2/p2) recovery time -- a small fraction of the expected O(n3/p) time-complexity of SVM. We implement these algorithms and evaluate them on a large scale cluster. Our evaluation indicates that the overall data movement for checkpointing in the baseline algorithm can be up to 100x the dataset size!, while the proposed novel algorithm is completely communication-free of checkpointing. In addition, it saves up to 20x space, while providing better (by an average of 5.5x speedup on 256 cores) recovery time than the baseline algorithm with different number of checkpoints. 
The experiments also show that our communication avoiding algorithm outperforms Spark MLLib SVM implementation by an average of 6.4x with 256 cores in the case of failure.\",\"PeriodicalId\":409991,\"journal\":{\"name\":\"2016 45th International Conference on Parallel Processing (ICPP)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 45th International Conference on Parallel Processing (ICPP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPP.2016.75\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 45th International Conference on Parallel Processing (ICPP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPP.2016.75","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Support Vector Machines (SVM) is a popular machine learning algorithm used for building classifiers and models. Parallel implementations of SVM that can run on large-scale supercomputers are becoming commonplace. However, these supercomputers, designed under data-movement constraints, frequently experience faults in compute devices, and many device faults manifest as permanent process/node failures. In this paper, we present several approaches for designing fault-tolerant SVM algorithms. First, we present an in-depth analysis to identify the critical data structures and build baseline algorithms that simply checkpoint these data structures periodically. Next, we propose a novel algorithm that requires no inter-node data movement for checkpointing and only O(n²/p²) recovery time, a small fraction of the expected O(n³/p) time complexity of SVM. We implement these algorithms and evaluate them on a large-scale cluster. Our evaluation indicates that the overall data movement for checkpointing in the baseline algorithm can be up to 100x the dataset size, while the proposed algorithm requires no communication for checkpointing. In addition, it saves up to 20x space while providing better recovery time (an average 5.5x speedup on 256 cores) than the baseline algorithm across different numbers of checkpoints. The experiments also show that, in the case of failure, our communication-avoiding algorithm outperforms the Spark MLlib SVM implementation by an average of 6.4x on 256 cores.
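To make the baseline idea in the abstract concrete, below is a minimal, hypothetical sketch of an SVM solver that periodically checkpoints its critical state (the dual vector alpha for a local data partition) to node-local storage and resumes from that checkpoint after a restart. This is an illustration only, not the authors' algorithm: the RBF kernel, the clipped coordinate-ascent update, the checkpoint interval, and the file name svm_alpha.ckpt.npy are assumptions made for the sketch, and a real deployment would run one such partition per process with roughly n/p rows each.

```python
# Hypothetical sketch (not the paper's algorithm): a simplified kernel SVM
# dual solver that periodically checkpoints its critical data structure, the
# dual vector `alpha`, to node-local storage and resumes from it on restart.
import os
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Dense RBF kernel for the local data partition (O((n/p)^2) memory)."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def train_with_checkpoints(X, y, C=1.0, iters=200, ckpt_every=50,
                           ckpt_path="svm_alpha.ckpt.npy"):
    n = X.shape[0]
    Q = (y[:, None] * y[None, :]) * rbf_kernel(X)   # Q_ij = y_i y_j K(x_i, x_j)

    # Resume from a node-local checkpoint if one exists; no communication needed.
    alpha = np.load(ckpt_path) if os.path.exists(ckpt_path) else np.zeros(n)

    for t in range(iters):
        # Simplified dual coordinate ascent: a clipped Newton step per coordinate
        # (a stand-in for the SMO-style solvers the paper builds on).
        for i in range(n):
            grad_i = 1.0 - Q[i] @ alpha
            alpha[i] = np.clip(alpha[i] + grad_i / Q[i, i], 0.0, C)

        if (t + 1) % ckpt_every == 0:
            np.save(ckpt_path, alpha)   # checkpoint only the critical state

    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.where(X[:, 0] > 0.0, 1.0, -1.0)
    alpha = train_with_checkpoints(X, y)
    print("support vectors:", int(np.sum(alpha > 1e-6)))
```

In the paper's distributed setting, each of the p processes would checkpoint only its own O(n/p) share of the state, which is what makes a communication-free checkpointing scheme and the reported O(n²/p²) recovery time plausible; that scheme itself is not reproduced in this sketch.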