2017 IEEE International Conference on Smart Computing (SMARTCOMP): Latest Publications

Towards Distributed Machine Learning in Shared Clusters: A Dynamically-Partitioned Approach
2017 IEEE International Conference on Smart Computing (SMARTCOMP). Pub Date: 2017-04-22. DOI: 10.1109/SMARTCOMP.2017.7947053
Peng Sun, Yonggang Wen, T. Duong, Shengen Yan
{"title":"Towards Distributed Machine Learning in Shared Clusters: A Dynamically-Partitioned Approach","authors":"Peng Sun, Yonggang Wen, T. Duong, Shengen Yan","doi":"10.1109/SMARTCOMP.2017.7947053","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2017.7947053","url":null,"abstract":"Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.","PeriodicalId":193593,"journal":{"name":"2017 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131764798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
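The abstract gives no implementation detail, but the core mechanism it describes (one partition per application, resized at runtime under a utilization-fairness trade-off) can be illustrated. The Python sketch below is a hypothetical stand-in: the water-filling max-min allocator and all names are assumptions made for illustration, not Dorm's actual optimizer.

```python
# Hypothetical sketch of a Dorm-style dynamic partitioner. The water-filling
# max-min heuristic and all names here are illustrative assumptions, not the
# paper's actual utilization-fairness optimizer.

from dataclasses import dataclass

@dataclass
class Partition:
    app: str         # application pinned to this partition
    demand: int      # CPU cores the application asks for
    allocated: int = 0

def fair_share(capacity: int, partitions: list) -> None:
    """Water-filling max-min fair allocation: repeatedly grant each
    unsatisfied partition an equal share of the remaining capacity."""
    remaining = capacity
    unsatisfied = list(partitions)
    while unsatisfied and remaining > 0:
        share = max(remaining // len(unsatisfied), 1)
        still_unsatisfied = []
        for p in unsatisfied:
            grant = min(share, p.demand - p.allocated, remaining)
            p.allocated += grant
            remaining -= grant
            if p.allocated < p.demand:
                still_unsatisfied.append(p)
        unsatisfied = still_unsatisfied

def resize_on_arrival(capacity, partitions, new_app, demand):
    """Re-run the allocation when a new application arrives, mimicking
    runtime partition resizing (no task restart is modeled here)."""
    partitions.append(Partition(new_app, demand))
    for p in partitions:
        p.allocated = 0
    fair_share(capacity, partitions)
    return partitions

if __name__ == "__main__":
    cluster = [Partition("ml-job-a", demand=48), Partition("ml-job-b", demand=16)]
    fair_share(64, cluster)                          # a: 48, b: 16
    resize_on_arrival(64, cluster, "ml-job-c", 32)   # a: 24, b: 16, c: 24
    for p in cluster:
        print(f"{p.app}: {p.allocated} cores")
```

Because each application keeps its partition until the allocator resizes it, tasks launch without per-task resource negotiation, which is the property the abstract credits for the low sharing overhead.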
CRNN: A Joint Neural Network for Redundancy Detection
2017 IEEE International Conference on Smart Computing (SMARTCOMP). Pub Date: 2016-04-27. DOI: 10.2139/ssrn.2962196
Xinyu Fu, Eugene Ch’ng, U. Aickelin, S. See
{"title":"CRNN: A Joint Neural Network for Redundancy Detection","authors":"Xinyu Fu, Eugene Ch’ng, U. Aickelin, S. See","doi":"10.2139/ssrn.2962196","DOIUrl":"https://doi.org/10.2139/ssrn.2962196","url":null,"abstract":"This paper proposes a novel framework for detecting redundancy in supervised sentence categorisation. Unlike traditional singleton neural network, our model incorporates character- aware convolutional neural network (Char-CNN) with character-aware recurrent neural network (Char-RNN) to form a convolutional recurrent neural network (CRNN). Our model benefits from Char-CNN in that only salient features are selected and fed into the integrated Char-RNN. Char-RNN effectively learns long sequence semantics via sophisticated update mechanism. We compare our framework against the state-of-the- art text classification algorithms on four popular benchmarking corpus. For instance, our model achieves competing precision rate, recall ratio, and F1 score on the Google-news data-set. For twenty- news-groups data stream, our algorithm obtains the optimum on precision rate, recall ratio, and F1 score. For Brown Corpus, our framework obtains the best F1 score and almost equivalent precision rate and recall ratio over the top competitor. For the question classification collection, CRNN produces the optimal recall rate and F1 score and comparable precision rate. We also analyse three different RNN hidden recurrent cells' impact on performance and their runtime efficiency. We observe that MGU achieves the optimal runtime and comparable performance against GRU and LSTM. For TFIDF based algorithms, we experiment with word2vec, GloVe, and sent2vec embeddings and report their performance differences.","PeriodicalId":193593,"journal":{"name":"2017 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127926234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
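As a rough illustration of the architecture the abstract describes (a Char-CNN selecting salient character-level features, feeding a recurrent layer that models long-range semantics), here is a minimal PyTorch sketch. All hyperparameters, layer sizes, and the pooling step are assumptions; the authors' actual configuration, and the MGU cell variant they evaluate, are not shown.

```python
# Minimal PyTorch sketch of a character-aware CNN + RNN ("CRNN") classifier.
# Layer sizes, kernel widths, and the GRU cell choice are assumptions for
# illustration; the paper also evaluates LSTM and MGU recurrent cells.

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, conv_channels=64,
                 hidden_dim=128, num_classes=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Char-CNN: 1-D convolution over the character sequence
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=5, padding=2)
        # Max-pooling keeps only the most salient local features
        self.pool = nn.MaxPool1d(kernel_size=2)
        # Char-RNN: GRU consumes the pooled feature sequence
        self.rnn = nn.GRU(conv_channels, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, chars):                  # chars: (batch, seq_len) int64
        x = self.embed(chars)                  # (batch, seq_len, embed_dim)
        x = self.conv(x.transpose(1, 2))       # (batch, channels, seq_len)
        x = self.pool(torch.relu(x))           # (batch, channels, seq_len/2)
        _, h = self.rnn(x.transpose(1, 2))     # h: (1, batch, hidden_dim)
        return self.fc(h.squeeze(0))           # (batch, num_classes) logits

if __name__ == "__main__":
    model = CRNN()
    batch = torch.randint(0, 128, (4, 256))    # 4 sentences, 256 chars each
    print(model(batch).shape)                  # torch.Size([4, 20])
```

The pooling between the convolution and the GRU reflects the abstract's claim that only salient CNN features are passed to the recurrent stage, which also halves the sequence length the RNN must process.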