Latest Publications from the 2017 International Conference on High Performance Computing & Simulation (HPCS)

A Hybrid Parallel Algorithm for Solving Euler Equation Using Explicit RKDG Method Based on OpenFOAM
S. Ye, Xiaoguang Ren, Yuhua Tang, Liyang Xu, Hao Li, Chao Li, Yufei Lin
DOI: https://doi.org/10.1109/HPCS.2017.99
Abstract: OpenFOAM is a framework of the open source C++ CFD toolbox for flexible engineering simulation, which uses the finite volume method (FVM) in the discretization of partial differential equations (PDEs). The problem-solving procedure in OpenFOAM consists of an equation discretization stage, an equation solving stage, and a field limiting stage. The achievable parallelism is limited by the equation solving stage, which involves communication. Compared to FVM, the discontinuous Galerkin (DG) method is a high-order discretization method, which accelerates the convergence of the residuals on the same mesh scale and provides higher resolution of the flow. When OpenFOAM is used with the DG method, the share of overhead in the equation discretization stage increases, especially when solving Euler equations with an explicit method. The equation discretization stage has better potential parallelism than the other two stages because it involves no communication. In this paper, we analyze the difference in time cost across these three stages between the original OpenFOAM and OpenFOAM with the DG method. By decoupling these three stages, a hybrid parallel algorithm for solving PDEs is proposed and implemented based on OpenFOAM with the DG method. The experimental results show that the simulation time is reduced by 16%, and the relative speedup of the hybrid parallel algorithm is up to 2.88 compared to the original parallel algorithm with the same degree of parallelism.
Citations: 0
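As a rough illustration of the stage decoupling described above, the following Python sketch parallelizes a communication-free per-cell "discretization" stage with a process pool while keeping the update and limiting stages serial. The stage names follow the abstract; the numerical kernel, time step and field sizes are placeholders, not the authors' RKDG implementation.

# Illustrative sketch only: a toy explicit time-stepping loop in which the
# communication-free "discretization" stage is parallelized across cells with
# a process pool, while the update ("solving") and field limiting stages run
# serially. The per-cell work is a placeholder, not the authors' RKDG code.
from concurrent.futures import ProcessPoolExecutor

def discretize_cell(cell_value):
    # Placeholder for per-cell flux/residual evaluation (needs no communication).
    return -0.1 * cell_value

def limit_field(values):
    # Placeholder limiter: clip values to a fixed range.
    return [max(min(v, 10.0), -10.0) for v in values]

def run(num_cells=1000, steps=10, dt=0.01, workers=4):
    field = [1.0] * num_cells
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):
            residuals = list(pool.map(discretize_cell, field, chunksize=100))  # parallel stage
            field = [v + dt * r for v, r in zip(field, residuals)]             # "solving" stage
            field = limit_field(field)                                         # limiting stage
    return field

if __name__ == "__main__":
    print(run()[:5])

In a real solver the per-cell discretization work is far heavier than this toy kernel, which is exactly what makes parallelizing that stage separately worthwhile.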
Dynamic Resource Selection in Cloud Service Broker
G. Z. Santoso, Young-Woo Jung, Seong-Woo Seok, E. Carlini, Patrizio Dazzi, J. Altmann, John Violos, Jamie Marshall
DOI: https://doi.org/10.1109/HPCS.2017.43
Abstract: A Cloud Service Broker federates multiple Cloud Service Providers into a single entity for customers. The benefits for the Cloud Service Consumer are flexibility, ease of use, and reduced cost. However, because of the unique properties and configurations of each cloud provider, it is sometimes not easy to migrate from one cloud provider to another. Furthermore, the advantages of using a broker should be available to consumers throughout the software life cycle, not only during deployment of the software. This paper outlines the main idea and design of the dynamic resource selection in the BASMATI Cloud Federation.
Citations: 11
A Topology-Adaptive Strategy for Graph Traversing
Jia Meng, Liang Cao, Huashan Yu
DOI: https://doi.org/10.1109/HPCS.2017.60
Abstract: Graphs are a key form of Big Data. Although graph computing technology has been studied extensively in recent years, it remains a grand challenge to process large-scale graphs efficiently. Computation on a graph propagates and updates the vertex values systematically. Both its complexity and parallelism are affected mainly by the algorithm's value-propagating pattern, and efficient graph computing depends on techniques compatible with that pattern. Graph traversing is a value-propagating pattern used by representative graph applications. This paper presents an efficient value-propagating framework for large-scale graph traversing applications. By partitioning the input graph based on its topology, it allows values for different source vertices to be propagated together, so as to reduce value-propagating overhead. To improve the parallel efficiency of graph traversals, a novel task scheduling mechanism has been devised. The mechanism allows the framework to improve load balance without loss of locality. A prototype of the framework has been implemented. We evaluated the prototype with a set of typical real-world and synthetic graphs. Compared with the owner-computes rule, experimental results show that this work has an overall speedup from 1.23 to 3.97. The speedup over Ligra is from 4.7 to 20.7.
Citations: 0
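The key idea above, propagating values for several source vertices in the same sweep, can be illustrated with a small batched multi-source BFS in Python. The graph layout, the distance semantics and the batching granularity below are assumptions for illustration; the paper's topology-based partitioning and task scheduling are not reproduced.

# Illustrative sketch only: batched multi-source BFS where frontiers for
# several sources advance together in one sweep over the adjacency lists.
from collections import defaultdict

def multi_source_bfs(adj, sources):
    # dist[v][s] = distance from source s to vertex v
    dist = defaultdict(dict)
    frontier = {s: {s} for s in sources}
    for s in sources:
        dist[s][s] = 0
    level = 0
    while any(frontier.values()):
        level += 1
        next_frontier = {s: set() for s in sources}
        for s, verts in frontier.items():
            for u in verts:
                for v in adj.get(u, ()):
                    if s not in dist[v]:
                        dist[v][s] = level
                        next_frontier[s].add(v)
        frontier = next_frontier
    return dist

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
    print(dict(multi_source_bfs(adj, [0, 1])))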
Minimizing Distribution and Data Loading Overheads in Parallel Training of DNN Acoustic Models with Frequent Parameter Averaging
P. Rosciszewski, Jakub Kaliski
DOI: https://doi.org/10.1109/HPCS.2017.89
Abstract: In this paper we investigate the performance of parallel deep neural network training with parameter averaging for acoustic modeling in Kaldi, a popular automatic speech recognition toolkit. We describe experiments based on training a recurrent neural network with 4 layers of 800 LSTM hidden states on a 100-hour corpus of annotated Polish speech data. We propose an MPI-based modification of the training program which minimizes the overheads of both distributing training jobs and loading and preprocessing training data by using message passing and CPU/GPU computation overlapping. The impact of the proposed optimizations is greater for more frequent neural network model averaging. To justify our efforts, we examine the influence of averaging frequency on the trained model's efficiency. We plot learning curves based on the average log-probability per frame of correct paths for utterances in the validation set, as well as word error rates of test set decodings. Based on experiments with training on 2 workstations with 4 GPUs each, we show that for the given network architecture, dataset and computing environment there is a certain range of averaging frequencies that is optimal for model efficiency. For the selected averaging frequency of 600k frames per iteration, the proposed optimizations reduce the training time by 54.9%.
Citations: 2
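The general pattern behind this training scheme, periodic parameter averaging across workers, can be sketched with mpi4py as below. The parameter vector, learning step and averaging interval are toy placeholders, and the sketch is not Kaldi code; it assumes mpi4py and numpy are installed and would be launched with something like "mpirun -np 4 python average.py".

# Illustrative sketch only: every few local steps, all workers average their
# parameters with an allreduce. The local update is a random placeholder.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

params = np.random.rand(1000).astype(np.float64)   # toy model parameters
AVERAGING_INTERVAL = 5                             # local steps between averages

for step in range(1, 21):
    # Placeholder local update standing in for a minibatch gradient step.
    params -= 0.01 * np.random.rand(1000)
    if step % AVERAGING_INTERVAL == 0:
        averaged = np.empty_like(params)
        comm.Allreduce(params, averaged, op=MPI.SUM)
        params = averaged / size                   # parameter averaging
        if rank == 0:
            print(f"step {step}: averaged across {size} workers")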
Parallelization of Large-Scale Drug-Protein Binding Experiments
Antonios Makris, D. Michail, Iraklis Varlamis, Chronis Dimitropoulos, K. Tserpes, G. Tsatsaronis, J. Haupt, M. Sawyer
DOI: https://doi.org/10.1109/HPCS.2017.39
Abstract: Drug polypharmacology or "drug promiscuity" refers to the ability of a drug to bind multiple proteins. Such studies have a huge impact on the pharmaceutical industry, but at the same time require large investments in wet-lab experiments. The respective in-silico experiments have a significantly smaller cost and minimize the expenses of the subsequent lab experiments. However, the process of finding similar protein targets for an existing drug passes through protein structural similarity and is a task highly demanding in computational resources. In this work, we propose several algorithms that port the protein similarity task to a parallel high-performance computing environment. The differences in size and complexity of the examined protein structures raise several issues in a naive parallelization process that significantly affect the overall time and required memory. We describe several optimizations for better memory and CPU balancing which achieve faster execution times. Experimental results, on a high-performance computing environment with 512 cores and 2048 GB of memory, demonstrate the effectiveness of our approach, which scales well to large numbers of protein pairs.
Citations: 0
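One common way to address the load-imbalance issue mentioned above is to dispatch the pairwise comparisons largest-first. The sketch below, with a placeholder similarity function and invented protein sizes, shows that heuristic with a Python process pool; it is not the structural-alignment code used in the paper.

# Illustrative sketch only: sort protein pairs by an estimated cost (product
# of structure sizes) so the biggest jobs start first, then farm them out to
# a process pool. The similarity score is a placeholder.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def similarity(pair):
    (name_a, size_a), (name_b, size_b) = pair
    # Placeholder score; a real run would compute structural similarity here.
    return (name_a, name_b, 1.0 / (1 + abs(size_a - size_b)))

def compare_all(proteins, workers=4):
    pairs = list(combinations(proteins, 2))
    # Largest estimated cost first, so big jobs do not arrive last and stall the pool.
    pairs.sort(key=lambda p: p[0][1] * p[1][1], reverse=True)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(similarity, pairs, chunksize=1))

if __name__ == "__main__":
    proteins = [("P1", 120), ("P2", 950), ("P3", 400), ("P4", 60)]
    for a, b, score in compare_all(proteins):
        print(a, b, round(score, 3))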
Data Exploration on Large Amounts of Relational Data through Keyword Queries
D. Beneventano, F. Guerra, Yannis Velegrakis
DOI: https://doi.org/10.1109/HPCS.2017.21
Abstract: The paper describes a new approach for querying relational databases through keyword search by exploiting Information Retrieval (IR) techniques. When users do not know the structures and the content, keyword search becomes the only efficient and effective solution for allowing people to explore a relational database. The approach is based on a unified view of the database relations (obtained through the full disjunction operator), whose composing tuples are considered as documents to be indexed and searched by means of an IR search engine. Moreover, as happens in relational databases, the system can merge the data stored in different documents to provide a complete answer to the user. In particular, two documents can be joined because either their tuples in the original database share some primary key or, again in the original database, some tuple is connected by a primary/foreign key relation. The paper introduces our preliminary proposal, the description of the tabular data structure for storing and retrieving the possible connections among the documents, and a metric for scoring the results.
Citations: 2
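A minimal sketch of the tuples-as-documents idea follows: tuples are tokenized into an inverted index, and two matching tuples are merged when a foreign key of one references the primary key of the other. The tables, column names and join rule are invented examples; the full-disjunction view and the scoring metric of the paper are not modeled.

# Illustrative sketch only: index relational tuples like documents and join
# keyword hits across tables through a primary/foreign key connection.
from collections import defaultdict

customers = [{"id": 1, "name": "Rossi", "city": "Modena"}]
orders = [{"id": 10, "customer_id": 1, "item": "keyboard"}]

def index_tuples(tuples, table):
    inverted = defaultdict(list)
    for t in tuples:
        for value in t.values():
            for token in str(value).lower().split():
                inverted[token].append((table, t))
    return inverted

def keyword_search(keywords):
    index = defaultdict(list)
    for table, rows in (("customers", customers), ("orders", orders)):
        for token, postings in index_tuples(rows, table).items():
            index[token].extend(postings)
    hits = [p for kw in keywords for p in index.get(kw.lower(), [])]
    # Join customer/order hits connected by the customer_id foreign key.
    joined = []
    for t1, r1 in hits:
        for t2, r2 in hits:
            if t1 == "customers" and t2 == "orders" and r2["customer_id"] == r1["id"]:
                joined.append({**r1, **{"order_" + k: v for k, v in r2.items()}})
    return joined or [r for _, r in hits]

if __name__ == "__main__":
    print(keyword_search(["rossi", "keyboard"]))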
Big-Data in Climate Change Models — A Novel Approach with Hadoop MapReduce
J. C. Loaiza, G. Giuliani, G. Fiameni
DOI: https://doi.org/10.1109/HPCS.2017.17
Abstract: The goal of this work is to present a software package which is able to process binary climate data by spawning Map-Reduce tasks while introducing minimal computational overhead and without modifying existing application code. The package combines two tools: Pipistrello, a Java utility that allows users to execute Map-Reduce tasks over any kind of binary file, and Tina, a lightweight Python library that, building on top of Pipistrello, is able to process scientific datasets, including NetCDF files. We benchmarked the combination of these two tools using a test Apache Hadoop cluster (4 nodes) and a relatively small data set (200 GB), obtaining encouraging results. With larger clusters and larger storage space, Tina and Pipistrello should be able to scale up and analyse hundreds of terabytes of scientific data in a faster, easier and more efficient way.
Citations: 2
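The map/reduce decomposition of a simple climate statistic can be sketched in plain Python and numpy, without Hadoop, Pipistrello or Tina: each map task reduces one chunk of a gridded field to partial sums, and the reduce step combines them into a global mean. The grid size and chunking below are arbitrary.

# Illustrative sketch only: a map/reduce-style global mean over chunks of a
# toy temperature field, standing in for a distributed job on binary data.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def map_chunk(chunk):
    # Emit partial statistics for one chunk of the field.
    return chunk.sum(), chunk.size

def reduce_partials(partials):
    total, count = map(sum, zip(*partials))
    return total / count

def global_mean(field, n_chunks=8, workers=4):
    chunks = np.array_split(field.ravel(), n_chunks)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    return reduce_partials(partials)

if __name__ == "__main__":
    temperature = np.random.uniform(250, 310, size=(360, 720))  # toy global grid, in kelvin
    print(round(global_mean(temperature), 2))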
A Fault Tolerance Manager with Distributed Coordinated Checkpoints for Automatic Recovery
Jorge Villamayor, Dolores Rexachs, E. Luque
DOI: https://doi.org/10.1109/HPCS.2017.73
Abstract: The number of components in High Performance Computing systems is continuously increasing to achieve more performance and satisfy the demands of scientific application users. To reduce the Mean Time To Repair in these systems and increase availability, Fault Tolerance (FT) solutions are required. The checkpoint/restart approach is a widely used mechanism in FT solutions, and one of the most widely used techniques for taking checkpoints in parallel applications implemented with the Message Passing Interface is coordinated checkpointing. In this paper a Fault Tolerance Manager (FTM) for coordinated checkpoint files is presented, providing users with automatic recovery from failures when computing nodes are lost. This proposal makes the configuration of FT simpler and transparent for users without knowledge of their application's implementation. Furthermore, system administrators are not required to install libraries in their cluster to support FTM. It takes advantage of node-local storage to save checkpoints, and it distributes copies of them across all the computation nodes, avoiding the bottleneck of central stable storage. This approach is particularly useful in IaaS cloud environments, where users have to pay for centralized stable storage services. This work is based on RADIC, a well-known architecture that provides fault tolerance in a distributed, flexible, automatic and scalable way. Experimental results show the benefits of the presented approach in a private cluster and a well-known cloud computing environment, Amazon EC2.
Citations: 2
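A minimal sketch of the distributed-copy idea, assuming invented paths and a single fixed neighbor, is given below: each checkpoint is written to node-local storage and a redundant copy is pushed to another node's storage, so a lost node does not take its checkpoint with it. The MPI coordination protocol and the RADIC architecture itself are not reproduced.

# Illustrative sketch only: write a checkpoint locally, mirror it to a
# "neighbor" directory standing in for another node, and restore from
# whichever copy is still reachable. Paths and neighbor mapping are made up.
import os
import pickle
import shutil

LOCAL_DIR = "/tmp/ckpt_local"        # node-local storage (assumed path)
NEIGHBOR_DIR = "/tmp/ckpt_neighbor"  # stands in for another node's storage

def save_checkpoint(state, step, rank):
    os.makedirs(LOCAL_DIR, exist_ok=True)
    os.makedirs(NEIGHBOR_DIR, exist_ok=True)
    local_path = os.path.join(LOCAL_DIR, f"rank{rank}_step{step}.pkl")
    with open(local_path, "wb") as f:
        pickle.dump(state, f)
    # Redundant copy on a different node so losing this node does not lose the checkpoint.
    shutil.copy(local_path, os.path.join(NEIGHBOR_DIR, os.path.basename(local_path)))
    return local_path

def restore_latest(rank):
    # Prefer the local copy; fall back to the neighbor's copy after a node loss.
    for directory in (LOCAL_DIR, NEIGHBOR_DIR):
        if not os.path.isdir(directory):
            continue
        candidates = [f for f in os.listdir(directory) if f.startswith(f"rank{rank}_")]
        if candidates:
            candidates.sort(key=lambda n: int(n.split("step")[1].split(".")[0]))
            with open(os.path.join(directory, candidates[-1]), "rb") as f:
                return pickle.load(f)
    return None

if __name__ == "__main__":
    save_checkpoint({"iteration": 42, "data": [1, 2, 3]}, step=42, rank=0)
    print(restore_latest(0))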
Lightweight Enhanced Collaborative Key Management Scheme for Smart Home Application
Sarra Naoui, Mohamed Elhoucine Elhdhili, L. Saïdane
DOI: https://doi.org/10.1109/HPCS.2017.117
Abstract: Key management is required to secure smart home applications in the context of the Internet of Things (IoT). However, these applications might be unable to use existing Internet key management protocols because of the presence of resource-limited nodes. In this paper, we propose a lightweight and secure key management scheme for smart homes. This solution is based on an existing collaborative scheme used to secure communication between a resource-limited node and the network's central device by offloading computationally expensive cryptographic primitives to proxy nodes. To improve the security of this scheme, we propose to limit participation in key derivation to trusted proxies by integrating a trust management system. To assess our proposed solution, we present a security evaluation using the Scyther tool and a formal validation of the security properties. We then evaluate the computational costs to highlight energy savings.
Citations: 14
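A toy sketch of trust-restricted key derivation follows: proxies below a trust threshold are excluded, and each remaining proxy's contribution is folded into a session key with HMAC-SHA256. The trust scores, threshold and derivation chain are invented for illustration and do not reproduce the paper's scheme.

# Illustrative sketch only: keep only proxies rated above a trust threshold
# and chain their contributions into a session key with HMAC-SHA256.
import hashlib
import hmac
import os

TRUST_THRESHOLD = 0.7

def select_trusted(proxies):
    # Keep only proxies the trust-management system currently rates highly.
    return [p for p in proxies if p["trust"] >= TRUST_THRESHOLD]

def derive_session_key(master_secret, proxies):
    trusted = select_trusted(proxies)
    if not trusted:
        raise RuntimeError("no trusted proxy available for key derivation")
    key = master_secret
    for proxy in trusted:
        # Each trusted proxy folds its contribution into the running key.
        key = hmac.new(key, proxy["contribution"], hashlib.sha256).digest()
    return key

if __name__ == "__main__":
    proxies = [
        {"id": "proxy-a", "trust": 0.9, "contribution": os.urandom(16)},
        {"id": "proxy-b", "trust": 0.4, "contribution": os.urandom(16)},  # excluded
        {"id": "proxy-c", "trust": 0.8, "contribution": os.urandom(16)},
    ]
    session_key = derive_session_key(os.urandom(32), proxies)
    print(session_key.hex())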
Countermeasuring Zero Day Attacks: Asset-Based Approach
Farag Azzedin, Husam Suwad, Zaid Alyafeai
DOI: https://doi.org/10.1109/HPCS.2017.129
Abstract: There is no doubt that security issues are on the rise, and defense mechanisms are becoming one of the leading subjects for academic and industry experts. In this paper, we focus on the security domain and envision a new way of looking at the security life cycle. We utilize our vision to propose an asset-based approach to counter zero-day attacks. To evaluate our proposal, we built a prototype. The initial results are promising and indicate that our prototype will achieve its goal of detecting zero-day attacks.
Citations: 13