Distributed Particle-Based Rendering Framework for Large Data Visualization on HPC Environments
J. Nonaka, Naohisa Sakamoto, Takashi Shimizu, M. Fujita, K. Ono, K. Koyamada
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.54

Abstract: In this paper, we present a distributed data visualization framework for HPC environments based on the PBVR (Particle-Based Volume Rendering) method. PBVR is a point-based rendering approach in which the volumetric data to be visualized is represented as a set of small, opaque particles. The method has object-space and image-space variants, defined by the space in which the particle data sets are generated. We focused on the object-space approach, which is advantageous when handling large-scale simulation data sets such as those generated by modern HPC systems, because the particle generation and the subsequent rendering processes can be easily decoupled. In this work, we took advantage of this separability to implement the proposed distributed rendering framework: the particle generation process uses the functionalities provided by KVS (Kyoto Visualization System), and the particle rendering process uses those provided by HIVE (Heterogeneously Integrated Visual-analytics Environment). The framework is also designed to work on systems without any hardware graphics acceleration capability, which are common in modern HPC operational environments. We evaluated this PBVR-based distributed visualization infrastructure in the K computer operational environment, using a CPU-only processing server for particle data generation and rendering. In this preliminary evaluation with several CFD (Computational Fluid Dynamics) simulation data sets, we obtained encouraging results that motivate further development toward making this system an effective visualization alternative for HPC users.
Power Aware High Performance Computing: Challenges and Opportunities for Application and System Developers — Survey & Tutorial
Matthias Maiterth, T. Wilde, D. Lowenthal, B. Rountree, M. Schulz, J. Eastep, Dieter Kranzlmüller
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.11

Abstract: Power and energy consumption are seen as among the most critical design factors for any next-generation large-scale HPC system. The price computing centers pay for energy is shifting budgets from investment to operating costs, leading to scenarios in which the sizes of systems will be determined by their power needs rather than by the initial hardware cost. As a consequence, virtually all funding agencies for HPC projects around the world have set aggressive goals for peak power requirements in future machines. Yet with today's HPC architectures and systems, these goals are still far out of reach: they will only be achievable through a complex set of mechanisms at all levels of hardware and software, from buildings and infrastructure to software control and all the way down to microarchitectural solutions. All of these mechanisms will ultimately impact the application developer: on future HPC systems, running a code efficiently (as opposed to purely with high performance) will be a major requirement for every user. This work accompanies the tutorial "Power Aware High Performance Computing: Challenges and Opportunities for Application and System Developers" and captures the key aspects discussed. We review the existing literature on the challenges caused by power and energy constraints, present available approaches in hardware and software, highlight impacts on HPC center and infrastructure design as well as operations, and show how this paradigm shift from "cycle awareness" to "power awareness" will impact application development.
Scalable Genomic Data Management System on the Cloud
Abdulrahman Kaitoua, Andrea Gulino, M. Masseroli, Pietro Pinoli, S. Ceri
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.19

Abstract: Thanks to the huge amount of sequenced data that is becoming available, building scalable solutions for supporting query processing and data analysis over genomic datasets is increasingly important. This paper presents GDMS, a scalable Genomic Data Management System for querying region-based genomic datasets; the focus of the paper is on the deployment of the system on a cluster hosted by CINECA.
Seamless Computing for Industrial Systems Spanning Cloud and Edge
Harald Mueller, Spyridon V. Gogouvitis, A. Seitz, B. Brügge
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.40

Abstract: Industrial systems are increasingly dominated by software, and this software spans several compute domains, from the decentralized edge to centralized data centers and clouds. While the productivity of application lifecycle management has tremendously improved in the area of cloud computing, there is no homogeneous, seamless environment for building, deploying, and operating software across these domains. This leads to separation, inefficient processes, and duplicated effort in providing software that runs across the different layers. This paper introduces the concept of seamless computing, which provides a seamless computing environment for multi-domain applications and supports the mobility of workloads between cloud and edge. The approach is based on transferring established, de facto standard cloud computing technologies to resource-constrained compute environments at the edge. After defining high-level requirements for seamless computing, a functional reference model is derived, and existing cloud technologies are discussed and selected according to their related capabilities.
{"title":"Linked Thesauri Quality Assessment and Documentation for Big Data Discovery","authors":"Riccardo Albertoni, M. D. Martino, A. Quarati","doi":"10.1109/HPCS.2017.16","DOIUrl":"https://doi.org/10.1109/HPCS.2017.16","url":null,"abstract":"Thesauri are knowledge systems which may ease Big Data access, fostering their integration and re-use. Currently several Linked Data thesauri covering multi-disciplines are available. They provide a semantic foundation to effectively support cross-organization and cross-disciplinary management and usage of Big Data. Thesauri effectiveness is affected by their quality. Diverse quality measures are available taking into account different facets. However, an overall measure is needed to compare several thesauri and to identify those more qualified for a proper reuse. In this paper, we propose a Multi Criteria Decision Making based methodology for the documentation of the quality assessment of linked thesauri as a whole. We present a proof of concept of the Analytic Hierarchy Process adoption to the set of Linked Data thesauri for the Environment deployed in LusTRE. We discuss the step-by-step practice to document the overall quality measurements, generated by the quality assessment, with the W3C promoted Data Quality Vocabulary.","PeriodicalId":115758,"journal":{"name":"2017 International Conference on High Performance Computing & Simulation (HPCS)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128762450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward a Model of Emotional Contagion Influence on Agile Development for Mission Critical Systems","authors":"Abdulaziz Alhubaishy, L. Benedicenti","doi":"10.1109/HPCS.2017.86","DOIUrl":"https://doi.org/10.1109/HPCS.2017.86","url":null,"abstract":"This position paper provides a framework to test positive and negative emotional contagion between agile teams for producing mission critical systems in order to enhance agile teams' cooperativeness and performance, lower conflicts, and to make decisions more accurate. Due to human errors in analyzing, designing, implementing, and testing phases for producing mission critical systems, losses and risks are significantly higher than other systems; while the adoption of agile development processes in mission critical systems has shown promising results of minimizing risks and costs. However, agile processes are people-oriented where human error is the main contributor to the success or failure of the system. Within the software industry, studies have investigated developers' conation and cognition to enhance their performance and communication within teams, while the role of affect (emotions and moods) was neglected for decades. Emotional contagion, as a factor of affect influence, has only been tested in studies at the managerial decision-making level; while no evidence of such studies that investigate whether or not the emotional contagion influences behavioural groups in agile developments which is the main concern of this paper.","PeriodicalId":115758,"journal":{"name":"2017 International Conference on High Performance Computing & Simulation (HPCS)","volume":"22 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124495385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Verified Parallel Implementation of Frequent Itemset Mining","authors":"C. Whitney, F. Loulergue","doi":"10.1109/HPCS.2017.138","DOIUrl":"https://doi.org/10.1109/HPCS.2017.138","url":null,"abstract":"Information technologies have allowed for the rapid growth of both data acquisition and data storage. With this growth comes the challenge of extracting useful information. One piece of information that is interesting to academics and industry is the relationships between items in a large data set. One approach is to find the relationships between items by calculating how frequently the items appear together in a subset. This is known as the frequent itemset mining problem. The problem goes as follows, given a database with sets of items, find the items that occur frequently together in a subset.","PeriodicalId":115758,"journal":{"name":"2017 International Conference on High Performance Computing & Simulation (HPCS)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130480939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Root Cause Analysis for Self-Healing in 5G Networks","authors":"Harrison Mfula, J. Nurminen","doi":"10.1109/HPCS.2017.31","DOIUrl":"https://doi.org/10.1109/HPCS.2017.31","url":null,"abstract":"Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and where possible eliminating the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process often carried out by network experts who manually analyze and correlate various pieces of network data such as, alarms, call traces, configuration management (CM) and key performance indicator (KPI) data in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated evidence based RCA. Compared to the current common practice, our solution is faster due to automation of the entire RCA process. The solution is also cheaper because it needs fewer or no personnel in order to operate and it improves efficiency through domain knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and it can handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH) especially in emerging self-organizing network (SON) based solutions in LTE Advanced (LTE-A) and 5G.","PeriodicalId":115758,"journal":{"name":"2017 International Conference on High Performance Computing & Simulation (HPCS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115383312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Implementation of Lattice-Based Program Obfuscators for Point Functions
L. Bahler, G. D. Crescenzo, Y. Polyakov, K. Rohloff, D. Cousins
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.115

Abstract: Lattice-based cryptography has recently produced several time-efficient cryptosystems that are provably secure under assumptions not known to be more easily solvable by quantum computers. An interesting research direction is improving their storage complexity, as current solutions are far from practical with respect to this metric. In this paper we present program obfuscators for point functions, based on lattice theory, that are time-efficient, storage-efficient, and provably secure under studied modifications of assumptions commonly used in lattice-based cryptography (the LWE and LWR assumptions). Point function obfuscators have recently been shown to be the first examples of program obfuscators provably secure under hardness assumptions commonly used in cryptography. A point function returns 1 if the input value equals a secret value stored in the program, and 0 otherwise. Notable implementation results due to our design and coding optimizations are: (a) a point function obfuscator based on a modified LWR assumption with running time 0.01 s and storage under 100 B, and (b) a point function obfuscator based on a modified LWE assumption with running time 0.2 s and storage under 35 KB, both on commodity computing resources.
Lightweight and Generic RDMA Engine Para-Virtualization for the KVM Hypervisor
Angelos Mouzakitis, Christian Pinto, Nikolay Nikolaev, A. Rigo, D. Raho, Babis Aronis, M. Marazakis
2017 International Conference on High Performance Computing & Simulation (HPCS). DOI: 10.1109/HPCS.2017.112

Abstract: Remote DMA (RDMA) engines are widely used in clusters and data centres to improve the performance of data transfers between applications running on different nodes of a computing system. RDMA is today supported by most network architectures and distributed programming models. However, with the massive use of virtualization, most applications will use RDMA from within virtual machines, and the virtualization of such I/O devices poses several challenges. This paper describes a generic para-virtualization framework based on API Remoting that provides the flexibility of software-based virtualization together with the low overhead of hardware-assisted solutions. The solution targets the KVM hypervisor but, because virtualization happens at the level of the programming API, is not bound to any particular network architecture or RDMA engine. In addition, two of the major limitations of para-virtualization are addressed: data sharing between host and guest, and interactions between guests and the hypervisor. Experimental results show near-native performance for the end user of the RDMA engine (i.e., maximum transfer bandwidth), with higher overhead only for the API functions used to initialize the RDMA device and to allocate or deallocate RDMA buffers.