{"title":"Collating time-series resource data for system-wide job profiling","authors":"V. Bumgardner, V. Marek, Ray L. Hyatt","doi":"10.1109/NOMS.2016.7502958","DOIUrl":null,"url":null,"abstract":"Through the collection and association of discrete time-series resource metrics and workloads, we can both provide benchmark and intra-job resource collations, along with system-wide job profiling. Traditional RDBMSes are not designed to store and process long-term discrete time-series metrics and the commonly used resolution-reducing round robin databases (RRDB), make poor long-term sources of data for workload analytics. We implemented a system that employs “Big-data” (Hadoop/HBase) and other analytics (R) techniques and tools to store, process, and characterize HPC workloads. Using this system we have collected and processed over a 30 billion time-series metrics from existing short-term high-resolution (15-sec RRDB) sources, profiling over 200 thousand jobs across a wide spectrum of workloads. The system is currently in use at the University of Kentucky for better understanding of individual jobs and system-wide profiling as well as a strategic source of data for resource allocation and future acquisitions.","PeriodicalId":344879,"journal":{"name":"NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium","volume":"173 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NOMS.2016.7502958","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Through the collection and association of discrete time-series resource metrics and workloads, we can provide both benchmark and intra-job resource collations, as well as system-wide job profiling. Traditional RDBMSes are not designed to store and process long-term discrete time-series metrics, and the commonly used resolution-reducing round-robin databases (RRDB) make poor long-term sources of data for workload analytics. We implemented a system that employs "big data" (Hadoop/HBase) and analytics (R) techniques and tools to store, process, and characterize HPC workloads. Using this system, we have collected and processed over 30 billion time-series metrics from existing short-term, high-resolution (15-sec RRDB) sources, profiling over 200,000 jobs across a wide spectrum of workloads. The system is currently in use at the University of Kentucky to better understand individual jobs, to provide system-wide profiling, and to serve as a strategic source of data for resource allocation and future acquisitions.
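
The abstract describes pulling short-term, 15-second samples out of RRDB sources and landing them in a long-term store (HBase) for later analysis. The sketch below illustrates one plausible form of that export step; it is not the authors' code. The use of the `rrdtool` Python bindings and the `happybase` HBase client, the table name `node_metrics`, the column family `m`, the example RRD path, and the host|metric|data-source|timestamp row-key layout are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation): read 15-second
# averages from an RRD file and write them as individual cells into HBase.
import time
import rrdtool    # python bindings for RRDtool
import happybase  # HBase Thrift client

def export_rrd_to_hbase(rrd_path, host, metric, hbase_host="localhost",
                        table_name="node_metrics", hours=1):
    end = int(time.time())
    start = end - hours * 3600

    # fetch() returns ((start, end, step), data-source names, rows of values)
    (fetch_start, fetch_end, step), ds_names, rows = rrdtool.fetch(
        rrd_path, "AVERAGE", "--start", str(start), "--end", str(end))

    connection = happybase.Connection(hbase_host)
    table = connection.table(table_name)

    ts = fetch_start
    with table.batch(batch_size=1000) as batch:
        for row in rows:
            for ds_name, value in zip(ds_names, row):
                if value is None:  # gaps in the RRD come back as None
                    ts_unchanged = True
                else:
                    # Hypothetical row key: host|metric|data-source|timestamp,
                    # keeping one node's samples contiguous for range scans.
                    row_key = f"{host}|{metric}|{ds_name}|{ts:012d}".encode()
                    batch.put(row_key, {b"m:value": str(value).encode()})
            ts += step
    connection.close()

if __name__ == "__main__":
    # Example invocation against a hypothetical Ganglia-style RRD file.
    export_rrd_to_hbase("/var/lib/ganglia/rrds/node01/cpu_user.rrd",
                        host="node01", metric="cpu_user")
```

A flat row-key scheme like this is only one option; the practical design choice is whatever key ordering keeps the scans needed for per-job collation (by node and time window) cheap in HBase.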