Experiences on Clustering High-Dimensional Data using pbdR

Sadika Amreen, A. Mockus
{"title":"Experiences on Clustering High-Dimensional Data using pbdR","authors":"Sadika Amreen, A. Mockus","doi":"10.1145/3144763.3144768","DOIUrl":null,"url":null,"abstract":"Motivation: Software engineering for High Performace Computing (HPC) environments in general [1] and for big data in particular [5] faces a set of unique challenges including high complexity of middleware and of computing environments. Tools that make it easier for scientists to utilize HPC are, therefore, of paramount importance. We provide an experience report of using one of such highly effective middleware pbdR [9] that allow the scientist to use R programming language without, at least nominally, having to master many layers of HPC infrastructure, such as OpenMPI [4] and ScalaPACK [2]. Objective: to evaluate the extent to which middleware helps improve scientist productivity, we use pbdR to solve a real problem that we, as scientists, are investigating. Our big data comes from the commits on GitHub and other project hosting sites and we are trying to cluster developers based on the text of these commit messages. Context: We need to be able to identify developer for every commit and to identify commits for a single developer. Developer identifiers in the commits, such as login, email, and name are often spelled in multiple ways since that information may come from different version control systems (Git, Mercurial, SVN, ...) and may depend on which computer is used (what is specified in .git/config of the home folder). Method: We train Doc2Vec [7] model where existing credentials are used as a document identifier and then use the resulting 200-dimensional vectors for the 2.3M identifiers to cluster these identifiers so that each cluster represents a specific individual. The distance matrix occupies 32TB and, therefore, is a good target for HPC in general and pbdR in particular. pbdR allows data to be distributed over computing nodes and even has implemented K-means and mixture-model clustering techniques in the package pmclust. Results: We used strategic prototyping [3] to evaluate the capabilities of pbdR and discovered that a) the use of middleware required extensive understanding of its inner workings thus negating many of the expected benefits; b) the implemented algorithms were not suitable for the particular combination of n, p, and k (sample size, data dimension, and the number of clusters); c) the development environment based on batch jobs increases development time substantially. 
Conclusions: In addition to finding from Basili et al., we find that the quality of the implementation of HPC infrastructure and its development environment has a tremendous effect on development productivity.","PeriodicalId":297626,"journal":{"name":"Proceedings of the 1st International Workshop on Software Engineering for High Performance Computing in Computational and Data-enabled Science & Engineering","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Software Engineering for High Performance Computing in Computational and Data-enabled Science & Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3144763.3144768","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Motivation: Software engineering for High Performance Computing (HPC) environments in general [1], and for big data in particular [5], faces a set of unique challenges, including the high complexity of middleware and of computing environments. Tools that make it easier for scientists to utilize HPC are, therefore, of paramount importance. We provide an experience report on one such highly effective middleware, pbdR [9], which allows scientists to use the R programming language without, at least nominally, having to master the many layers of HPC infrastructure, such as OpenMPI [4] and ScaLAPACK [2].

Objective: To evaluate the extent to which middleware helps improve scientist productivity, we use pbdR to solve a real problem that we, as scientists, are investigating. Our big data comes from commits on GitHub and other project hosting sites, and we are trying to cluster developers based on the text of these commit messages.

Context: We need to be able to identify the developer for every commit and to identify all commits by a single developer. Developer identifiers in the commits, such as login, email, and name, are often spelled in multiple ways, since that information may come from different version control systems (Git, Mercurial, SVN, ...) and may depend on which computer is used (what is specified in .git/config of the home folder).

Method: We train a Doc2Vec [7] model in which existing credentials are used as document identifiers, and then use the resulting 200-dimensional vectors for the 2.3M identifiers to cluster them so that each cluster represents a specific individual. The distance matrix occupies 32 TB and is, therefore, a good target for HPC in general and pbdR in particular. pbdR allows data to be distributed over compute nodes and even implements K-means and mixture-model clustering techniques in the pmclust package.

Results: We used strategic prototyping [3] to evaluate the capabilities of pbdR and discovered that a) using the middleware required an extensive understanding of its inner workings, negating many of the expected benefits; b) the implemented algorithms were not suitable for our particular combination of n, p, and k (sample size, data dimension, and number of clusters); c) a development environment based on batch jobs substantially increases development time.

Conclusions: In addition to the findings of Basili et al., we find that the quality of the implementation of the HPC infrastructure and of its development environment has a tremendous effect on development productivity.
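
To make the data-distribution model mentioned in the Method section concrete, here is a minimal sketch of a pbdR SPMD script using the pbdDMAT package. It is our own illustration, not code from the paper: the matrix dimensions are scaled down from the 2.3M x 200 embedding matrix so the example runs on a small cluster.

    ## Minimal pbdDMAT sketch: distribute a matrix across MPI ranks and
    ## operate on it as if it were an ordinary R matrix.
    ## Dimensions are illustrative, scaled down from the paper's 2.3M x 200.
    suppressMessages(library(pbdDMAT))

    init.grid()                      # set up the 2-d process grid (starts MPI)

    ## Block-cyclically distributed random matrix; every rank holds a piece.
    X <- ddmatrix("rnorm", nrow = 10000, ncol = 200)

    ## Common matrix operations are overloaded for ddmatrix objects, e.g.
    ## the 200 x 200 cross-product underlying many distance computations.
    G <- crossprod(X)

    comm.print(dim(G))               # prints once, on rank 0 by default

    finalize()                       # shut down MPI

The script is launched in SPMD style, e.g. mpirun -np 16 Rscript distribute.R: every rank executes the same code on its own piece of the data, which is what allows something like the 32 TB distance matrix to be spread across nodes.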
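The clustering step itself goes through pmclust, which operates on data laid out this way. The sketch below is hedged: the per-rank input file name is hypothetical, K = 100 is an illustrative cluster count, and the result field we print follows our reading of the pmclust documentation rather than the paper.

    ## Sketch of distributed K-means with pmclust on row-blocked data.
    ## Assumptions (not from the paper): each rank loads its own shard of
    ## the 200-dimensional Doc2Vec vectors from a hypothetical .rds file.
    suppressMessages(library(pmclust))   # attaches pbdMPI as a dependency

    init()                               # SPMD: every rank runs this script

    ## Hypothetical per-rank shard of the embedding matrix
    ## (one row per developer identifier, 200 columns).
    X <- readRDS(sprintf("embeddings_rank%03d.rds", comm.rank()))

    ## Distributed K-means; pmclust() would fit a mixture model instead.
    fit <- pkmeans(X, K = 100)

    ## Number of identifiers per cluster, printed on rank 0 only.
    comm.print(fit$n.class, rank.print = 0)

    finalize()

Launched the same way (e.g. mpirun -np 64 Rscript cluster_ids.R), this is the pattern the paper evaluates; the Results paragraph reports why, at this combination of n, p, and k, it fell short in practice.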