Int. J. Netw. Comput. Latest Articles

Preface: Special Issue on Workshop on Advances in Parallel and Distributed Computational Models 2015
Int. J. Netw. Comput. Pub Date : 2016-01-12 DOI: 10.15803/IJNC.6.1_1
A. Fujiwara, Susumu Matsumae
{"title":"Preface: Special Issue on Workshop on Advances in Parallel and Distributed Computational Models 2015","authors":"A. Fujiwara, Susumu Matsumae","doi":"10.15803/IJNC.6.1_1","DOIUrl":"https://doi.org/10.15803/IJNC.6.1_1","url":null,"abstract":"The 17th Workshop on Advances in Parallel and Distributed Computational Models (APDCM) -- held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS) on May 25-29, 2015, in Hyderabad, India, - aims to provide a timely forum for the exchange and dissemination of new ideas, techniques and research in the field of the parallel and distributed computational models. The APDCM workshop has a history of attracting participation from reputed researchers worldwide. The program committee has encouraged the authors of accepted papers to submit full-versions of their manuscripts to the International Journal of Networking and Computing (IJNC) after the workshop. After a thorough reviewing process, with extensive discussions, five articles on various topics have been selected for publication on the IJNC special issue on APDCM. On behalf of the APDCM workshop, we would like to express our appreciation for the large efforts of reviewers who reviewed papers submitted to the special issue. Likewise, we thank all the authors for submitting their excellent manuscripts to this special issue. We also express our sincere thanks to the editorial board of the International Journal of Networking and Computing, in particular, to the Editor-in-chief Professor Koji Nakano. This special issue would not have been possible without his support.","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126500558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Topology Management for Reducing Energy Consumption and Tolerating Failures in Wireless Sensor Networks
Int. J. Netw. Comput. Pub Date : 2016-01-12 DOI: 10.15803/IJNC.6.1_107
Qian Zhao, Y. Nakamoto
{"title":"Topology Management for Reducing Energy Consumption and Tolerating Failures in Wireless Sensor Networks","authors":"Qian Zhao, Y. Nakamoto","doi":"10.15803/IJNC.6.1_107","DOIUrl":"https://doi.org/10.15803/IJNC.6.1_107","url":null,"abstract":"We investigated energy efficient and fault tolerant topologies for wireless sensor networks (WSNs), addressing the need to minimize communication distances because the energy used for communication is proportional to the 2nd to 6th power of the distance. We also investigated the energy hole phenomenon, in which non-uniform energy usage among nodes causes non-uniform lifetimes. This, in turn, increases the communication distances and results in a premature shutdown of the entire network. Because some sensor nodes in a WSN may be unreliable, it must be tolerant to faults. A routing algorithm called the “energy hole aware energy efficient communication routing algorithm” (EHAEC) was previously proposed. It solves the energy hole problem to the maximum extent possible while minimizing the amount of energy used for communication, by generating an energy efficient spanning tree. In this paper, we propose two provisioned fault tolerance algorithms: EHAEC for one-fault tolerance (EHAEC-1FT) and the active spare selecting algorithm (ASSA). EHAEC-1FT is a variation of EHAEC. It identifies redundant communication routes using the EHAEC tree and guarantees 2-connectivity (i.e., tolerates the failure of one node). The ASSA attempts to find active spare nodes for critical nodes. It uses two impact factors, I± and I² , which can be adjusted so that the result is either more fault tolerant or energy efficient. The spare nodes fix failures by replacing them. In our simulations, EHAEC was 3.4 to 4.8 times more energy efficient than direct data transmission, and thus extended the WSN lifetime. EHAEC-1FT outperformed EHAEC in terms of energy efficiency when fault tolerance was the most important, and a fault tolerant redundancy was created when or before a failure occurred. Moreover, we demonstrated that the ASSA was more energy efficient than EHAEC-1FT, and the effect of using different I± and I² .","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124365622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
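The claim that radio energy grows with the 2nd to 6th power of distance is why short multi-hop routes along a spanning tree can beat direct transmission. Below is a minimal sketch of that arithmetic using a generic first-order radio model; the constants e_elec and e_amp, the path-loss exponent alpha, and the decision to ignore receive-side energy at the relay are illustrative assumptions, not values or choices from the paper.

```python
def tx_energy(bits, d, e_elec=50e-9, e_amp=100e-12, alpha=2.0):
    """Transmission energy for `bits` bits over distance d (meters) under a
    generic radio model E = bits * (e_elec + e_amp * d**alpha).
    All constants are illustrative assumptions, not the paper's parameters."""
    return bits * (e_elec + e_amp * d ** alpha)

# Sending 4000 bits directly over 100 m vs. relaying over two 50 m hops:
# with alpha >= 2, the amplifier term satisfies 2*(d/2)**alpha <= d**alpha / 2,
# which is why a tree of short hops (as EHAEC builds) can save transmit energy.
print(tx_energy(4000, 100.0))          # direct transmission
print(2 * tx_energy(4000, 50.0))       # two shorter hops
```

A full comparison would also charge the relay node for receiving and feed its extra drain into the energy-hole analysis; this sketch only illustrates the distance term.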
Checkpointing Strategies for Scheduling Computational Workflows
Int. J. Netw. Comput. Pub Date : 2016-01-12 DOI: 10.15803/IJNC.6.1_2
G. Aupy, A. Benoit, H. Casanova, Y. Robert
{"title":"Checkpointing Strategies for Scheduling Computational Workflows","authors":"G. Aupy, A. Benoit, H. Casanova, Y. Robert","doi":"10.15803/IJNC.6.1_2","DOIUrl":"https://doi.org/10.15803/IJNC.6.1_2","url":null,"abstract":"We study the scheduling of computational workflows on compute resources that experience exponentially distributed failures. When a failure occurs, rollback and recovery is used to resume the execution from the last checkpointed state. The scheduling problem is to minimize the expected execution time by deciding in which order to execute the tasks in the workflow and deciding for each task whether to checkpoint it or not after it completes. We give a polynomial-time optimal algorithm for fork DAGs (Directed Acyclic Graphs) and show that the problem is NP-complete with join DAGs. We also investigate the complexity of the simple case in which no task is checkpointed. Our main result is a polynomial-time algorithm to compute the expected execution time of a workflow, with a given task execution order and specified to-be-checkpointed tasks. Using this algorithm as a basis, we propose several heuristics for solving the scheduling problem. We evaluate these heuristics for representative workflow configurations.Â","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132658994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
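The central quantity here, the expected execution time of a fixed task order with a chosen set of checkpointed tasks under exponential failures, is what the paper computes exactly in polynomial time. As a rough baseline for intuition only, the following Monte Carlo sketch estimates it for a linear chain of tasks; the names, parameters, and the simplifying assumption that failures may strike during checkpointing but not during recovery are ours, not the paper's algorithm.

```python
import random

def simulate_once(tasks, checkpointed, lam, C, R):
    """One simulated run of a linear task chain under exponential failures
    of rate lam. `checkpointed` is the set of task indices that save a
    checkpoint (cost C) on completion; a failure rolls execution back to the
    last checkpoint and adds a recovery of duration R."""
    t, i = 0.0, 0                       # i = first task of the current segment
    while i < len(tasks):
        j = i                           # segment ends at the next checkpointed task
        while j < len(tasks) - 1 and j not in checkpointed:
            j += 1
        work = sum(tasks[i:j + 1]) + (C if j in checkpointed else 0.0)
        failure = random.expovariate(lam)
        if failure < work:              # a failure interrupts the segment
            t += failure + R            # partial work is lost, recovery is paid
        else:                           # segment (and its checkpoint) committed
            t += work
            i = j + 1
    return t

def expected_makespan(tasks, checkpointed, lam, C, R, trials=20000):
    return sum(simulate_once(tasks, checkpointed, lam, C, R)
               for _ in range(trials)) / trials

print(expected_makespan([3.0, 5.0, 2.0], checkpointed={1}, lam=0.05, C=0.3, R=1.0))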
Overhead-aware Load Distribution and System Shutdown for Energy-Efficient Computing
Int. J. Netw. Comput. Pub Date : 2015-07-10 DOI: 10.15803/IJNC.5.2_304
Jörg Lenhardt, W. Schiffmann
{"title":"Overhead-aware Load Distribution and System Shutdown for Energy-Efficient Computing","authors":"Jörg Lenhardt, W. Schiffmann","doi":"10.15803/IJNC.5.2_304","DOIUrl":"https://doi.org/10.15803/IJNC.5.2_304","url":null,"abstract":"The energy consumption of server farms is steadily increasing. This is mainly due to an increasing number of servers which are often underutilized most of the time. In this paper we discuss various strategies to improve the energy efficiency of a datacenter measured by the average number of operations executed per Joule. We assume a collection of heterogeneous server nodes that are characterized by their SPECpower-benchmarks. If a time-variable divisible (work)load should to be executed on such a datacenter the energy efficiency can be improved by a smart decomposition of this load into appropriate chunks. In the paper we discuss a sophisticated load distribution strategy and extend it by an adaptive power management for dynamically switching underutilized servers to performance states with lower energy consumption. Of course, also transitions to higher performance/energy states are possible if required by the current load. We introduce a new time slice model that allows a reduction of the switching overhead by means of a few merge and adjust cycles. The resulting ALD+ strategy was evaluated in a webserver environment with real Wikipedia traces. It achieved significant reductions of the energy consumption by the combination of load distribution and server switching by means of the time slice model. Moreover, ALD+ can be easily integrated into any parallel webserver setup.","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":" 26","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120825868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
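To make the "operations per Joule" objective concrete, here is a sketch that distributes a divisible load over heterogeneous servers and leaves the rest switched off. It interpolates power linearly between idle and full load, which is only a stand-in for the SPECpower curves the paper uses, and the greedy fill-most-efficient-first policy is an illustration, not the ALD+ strategy itself.

```python
def power_at(server, util):
    """Power draw (W) at utilization util in [0, 1], assuming a linear
    idle-to-full-load power curve (an assumption; the paper works with
    real SPECpower benchmark data instead)."""
    return server["p_idle"] + util * (server["p_full"] - server["p_idle"])

def efficiency_at(server, util):
    """Operations per Joule at a given utilization."""
    return 0.0 if util <= 0.0 else util * server["ops_max"] / power_at(server, util)

def distribute(load, servers):
    """Greedy sketch: fill the servers that are most efficient at full load
    first; servers that receive no load stay shut down."""
    assignment, total_power = {}, 0.0
    for s in sorted(servers, key=lambda s: efficiency_at(s, 1.0), reverse=True):
        share = min(load, s["ops_max"])
        if share <= 0.0:
            break
        util = share / s["ops_max"]
        assignment[s["name"]] = share
        total_power += power_at(s, util)
        load -= share
    return assignment, total_power

servers = [
    {"name": "old", "ops_max": 2.0e6, "p_idle": 120.0, "p_full": 260.0},
    {"name": "new", "ops_max": 3.5e6, "p_idle": 60.0,  "p_full": 210.0},
]
print(distribute(4.0e6, servers))
```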
Preface: Special Issue on the Second International Symposium on Computing and Networking
Int. J. Netw. Comput. Pub Date : 2015-07-10 DOI: 10.15803/IJNC.5.2_252
K. Nakano
{"title":"Preface: Special Issue on the Second International Symposium on Computing and Networking","authors":"K. Nakano","doi":"10.15803/IJNC.5.2_252","DOIUrl":"https://doi.org/10.15803/IJNC.5.2_252","url":null,"abstract":"The Second International Symposium on Computing and Networking (CANDAR 2014) was held in Shizuoka, Japan, from December 10th to 12th, 2014. The organizers of the CANDAR 2014 invited authors to submit the extended version of the presented papers. As a result, 14 articles have been submitted to this special issue. This issue includes the extended version of 7 papers that have been accepted. This issue owes a great deal to a number of people who devoted their time and expertise to handle the submitted papers. In particular, I would like to thank the guest editors for the excellent review process: Professor Shuichi Ichikawa, Professor Katsunobu Imai, Professor Hidetsugu Irie, Professor Yoshiaki Kakuda, Professor Susumu Matsumae, Professor Toru Nakanishi, Professor Hiroyuki Sato, Professor Chisa Takano, and Professor Takashi Yokota. Words of gratitude are also due to the anonymous reviewers who carefully read the papers and provided detailed comments and suggestions to improve the quality of the submitted papers. This special issue would not have been without their efforts.","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124684293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enumerating Joint Weight of a Binary Linear Code Using Parallel Architectures: Multi-core CPUs and GPUs
Int. J. Netw. Comput. Pub Date : 2015-07-10 DOI: 10.15803/IJNC.5.2_290
Shohei Ando, Fumihiko Ino, T. Fujiwara, K. Hagihara
{"title":"Enumerating Joint Weight of a Binary Linear Code Using Parallel Architectures: multi-core CPUs and GPUs","authors":"Shohei Ando, Fumihiko Ino, T. Fujiwara, K. Hagihara","doi":"10.15803/IJNC.5.2_290","DOIUrl":"https://doi.org/10.15803/IJNC.5.2_290","url":null,"abstract":"In this paper, we present a parallel algorithm for enumerating joint weight of a binary linear $(n,k)$ code, aiming at accelerating assessment of its decoding error probability for network coding. Our algorithm is implemented on a multi-core CPU system and an NVIDIA graphics processing unit (GPU) system using OpenMP and compute unified device architecture (CUDA), respectively. To reduce the number of pairs of codewords to be investigated, our parallel algorithm reduces dimension $k$ by focusing on the all-one vector included in many practical codes. We also employ a population count instruction to compute joint weight of codewords with a less number of instructions. Furthermore, an efficient atomic vote and reduce scheme is deployed in our GPU-based implementation. We apply our CPU- and GPU-based implementations to a subcode of a (127,22) BCH code to evaluate the impact of acceleration.","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
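The per-pair kernel the abstract alludes to, computing a joint weight with a population-count instruction, can be sketched with packed integers; `int.bit_count()` (Python 3.10+) stands in for the hardware popcount here, and the parallel CPU/GPU enumeration over all pairs of codewords is omitted.

```python
def joint_weight(a, b, n):
    """Joint weight of two length-n binary words packed as Python integers:
    the counts of bit positions where (a_i, b_i) equals (1,1), (1,0), (0,1)
    and (0,0). Each count needs only bitwise ops plus a popcount."""
    mask = (1 << n) - 1
    w11 = (a & b).bit_count()
    w10 = (a & ~b & mask).bit_count()
    w01 = (~a & b & mask).bit_count()
    w00 = n - w11 - w10 - w01
    return w11, w10, w01, w00

print(joint_weight(0b101100, 0b100110, 6))   # -> (2, 1, 1, 2)
```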
Hypercube Fault Tolerant Routing with Bit Constraint
Int. J. Netw. Comput. Pub Date : 2015-07-10 DOI: 10.15803/IJNC.5.2_272
A. Bossard, K. Kaneko
{"title":"Hypercube Fault Tolerant Routing with Bit Constraint","authors":"A. Bossard, K. Kaneko","doi":"10.15803/IJNC.5.2_272","DOIUrl":"https://doi.org/10.15803/IJNC.5.2_272","url":null,"abstract":"Thanks to its simple definition, the hypercube topology is very popular as interconnection network of parallel systems. There have been several routing algorithms described for the hypercube topology, yet in this paper we focus on hypercube routing extended with an additional restriction: bit constraint. Concretely, path selection is performed on a particular subset of nodes: the nodes are required to satisfy a condition regarding their bit weights (a.k.a. Hamming weights). There are several applications to such restricted routing, including simplification of disjoint paths routing. We propose in this paper two hypercube routing algorithms enforcing such node restriction: first, a shortest path routing algorithm, second a fault tolerant point-to-point routing algorithm. Formal proof of correctness and complexity analysis for the described algorithms are conducted. We show that the shortest path routing algorithm proposed is time optimal. Finally, we perform an empirical evaluation of the proposed fault tolerant point-to-point routing algorithm so as to inspect its practical behaviour. Along with this experimentation, we analyse further the average performance of the proposed algorithm by discussing the average Hamming distance in a hypercube when satisfying a bit constraint.","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131026765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
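To illustrate what a bit (Hamming-weight) constraint does to hypercube routing, here is a naive greedy sketch: at each hop it flips one bit in which the current node and the destination differ, accepting the flip only if the resulting node's weight stays inside an assumed window [w_min, w_max]. It can get stuck under tight constraints and claims no optimality; the paper's algorithms come with correctness proofs, and its shortest-path algorithm is time optimal. `int.bit_count()` requires Python 3.10+.

```python
def greedy_route(src, dst, n, w_min, w_max):
    """Greedy shortest-path attempt in an n-dimensional hypercube where every
    node on the path must have Hamming weight in [w_min, w_max]. Each hop
    flips one differing bit, so the Hamming distance to dst strictly
    decreases and the loop terminates; returns None if the greedy gets stuck."""
    path, cur = [src], src
    while cur != dst:
        moved = False
        for i in range(n):
            if (cur ^ dst) >> i & 1:                    # bit i still differs
                nxt = cur ^ (1 << i)
                if w_min <= nxt.bit_count() <= w_max:   # constraint respected
                    path.append(nxt)
                    cur, moved = nxt, True
                    break
        if not moved:
            return None
    return path

print(greedy_route(0b0011, 0b1100, n=4, w_min=1, w_max=3))
```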
Extensions of Access-Point Aggregation Algorithm for Large-scale Wireless Local Area Networks
Int. J. Netw. Comput. Pub Date : 2015-01-10 DOI: 10.15803/IJNC.5.1_200
Md. Ezharul Islam, N. Funabiki, T. Nakanishi
{"title":"Extensions of Access-Point Aggregation Algorithm for Large-scale Wireless Local Area Networks","authors":"Md. Ezharul Islam, N. Funabiki, T. Nakanishi","doi":"10.15803/IJNC.5.1_200","DOIUrl":"https://doi.org/10.15803/IJNC.5.1_200","url":null,"abstract":"Recently, many organizations such as universities and companies have deployed wireless local area networks (WLANs) to cover the whole site for ubiquitous network services. In these WLANs, wireless access-points (APs) are often managed independently by different groups such as laboratories or departments. Then, a host may detect signals from multiple APs, which can degrade the communication performance due to radio interferences among them and increase operational costs. Previously, we proposed the AP aggregation algorithm to solve this problem by minimizing the number of active APs through aggregating them using the virtual AP technology . However, our extensive simulations in various instances found that 1) the minimization of active APs sometimes excessively degrades the network performance, and 2) the sequential optimization of host associations does not always reach optimal where slow links are still used. In this paper, we propose two extensions of the AP aggregation algorithm to solve these problems by 1) ensuring the minimum average throughput for any host by adding active APs and 2) further optimizing host associations by changing multiple hosts simultaneously in the host association finalization phase. We verify the effectiveness through simulations in four network instances using the WIMNET simulator.Â","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133945624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Linear Performance-Breakdown Model: A Framework for GPU Kernel Program Performance Analysis
Int. J. Netw. Comput. Pub Date : 2015-01-10 DOI: 10.15803/IJNC.5.1_86
Mario Alberto Chapa Martell, Hiroyuki Sato
{"title":"Linear Performance-Breakdown Model: A Framework for GPU kernel programs performance analysis","authors":"Mario Alberto Chapa Martell, Hiroyuki Sato","doi":"10.15803/IJNC.5.1_86","DOIUrl":"https://doi.org/10.15803/IJNC.5.1_86","url":null,"abstract":"In this paper we describe our performance-breakdown model for GPU programs. GPUs are a popular choice as accelerator hardware due to their high performance, high availability and relatively low price. However, writing programs that are highly efficient represents a difficult and time consuming task for programmers because of the complexities of GPU architecture and the inherent difficulty of parallel programming. That is the reason why we propose the Linear Performance-Breakdown Model Framework as a tool to assist in the optimization process. We show that the model closely matches the behavior of the GPU by comparing the execution time obtained from experiments in two different types of GPU, an Accelerated Processing Unit (APU) and a GTX660, a discrete board. We also show performance-breakdown results obtained from applying the modeling strategy and how they indicate the time spent during the computation in each of the three Mayor Performance Factors that we define as processing time, global memory transfer time and shared memory transfer time.Â","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127811126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
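One natural reading of a linear performance-breakdown model with three major performance factors is a least-squares fit of per-factor unit costs; the sketch below does exactly that on hypothetical per-kernel counts of compute operations, global-memory transactions, and shared-memory transactions. Both the numbers and the exact form T = c_proc*ops + c_gmem*g + c_smem*s are illustrative assumptions, not the paper's calibration procedure.

```python
import numpy as np

# Hypothetical profile of four kernel runs: compute ops, global-memory
# transactions, shared-memory transactions, and the measured time in seconds.
X = np.array([
    [1.0e9, 2.0e8, 5.0e8],
    [2.0e9, 1.0e8, 9.0e8],
    [5.0e8, 4.0e8, 1.0e8],
    [3.0e9, 3.0e8, 2.0e8],
])
t = np.array([0.021, 0.028, 0.013, 0.033])

# Fit per-unit costs for each factor, then attribute time per factor and run.
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
breakdown = X * coef                      # seconds attributed to each factor
print("unit costs:", coef)
print("predicted totals:", breakdown.sum(axis=1))
```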
Identification and Elimination of Platform-Specific Code Smells in High Performance Computing Applications
Int. J. Netw. Comput. Pub Date : 2015-01-10 DOI: 10.15803/IJNC.5.1_180
Chunyan Wang, S. Hirasawa, H. Takizawa, Hiroaki Kobayashi
{"title":"Identification and Elimination of Platform-Specific Code Smells in High Performance Computing Applications","authors":"Chunyan Wang, S. Hirasawa, H. Takizawa, Hiroaki Kobayashi","doi":"10.15803/IJNC.5.1_180","DOIUrl":"https://doi.org/10.15803/IJNC.5.1_180","url":null,"abstract":"A code smell is a code pattern that might indicate a code or design problem, which makes the application code hard to evolve and maintain. Automatic detection of code smells has been studied to help users find which parts of their application codes should be refactored. However, code smells have not been defined in a formal manner. Moreover, existing detection tools are designed mainly for object-oriented applications, but rarely provided for high performance computing (HPC) applications. HPC applications are usually optimized for a particular platform to achieve a high performance, and hence have special code smells called platform-specific code smells (PSCSs). The purpose of this work is to develop a code smell alert system to help users find PSCSs of HPC applications to improve the performance portability across different platforms. This paper presents a PSCS alert system that is based on an abstract syntax tree (AST) and XML. Code patterns of PSCSs are defined in a formal way using the AST information represented in XML. XML Path Language (XPath) is used to describe those patterns. A database is built to store the transformation recipes written in XSLT files for eliminating detected PSCSs. The recall and precision evaluation results obtained by using real applications show that the proposed system can detect potential PSCSs accurately. The evaluation on performance portability of real applications demonstrates that eliminating PSCSs leads to significant performance  changes and therefore the code portions with detected PSCSs have to be refactored to improve the performance portability across multiple platforms.Â","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127879866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
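The detection mechanism described here, PSCS patterns written as XPath queries over an AST serialized to XML, can be mimicked in a few lines. The XML schema, element names, and the example pattern below are invented purely for illustration and do not reflect the paper's actual AST representation or smell catalogue.

```python
from lxml import etree

# Hypothetical AST fragment serialized to XML (made-up schema).
ast_xml = b"""
<ast>
  <loop kind="for" unroll="8">
    <call name="__builtin_prefetch"/>
  </loop>
  <loop kind="for">
    <pragma name="omp simd"/>
  </loop>
</ast>
"""

tree = etree.fromstring(ast_xml)
# An example "platform-specific code smell" pattern: a manually unrolled loop
# that contains a compiler-specific prefetch intrinsic.
pattern = "//loop[@unroll and call[@name='__builtin_prefetch']]"
for node in tree.xpath(pattern):
    print("potential PSCS:", etree.tostring(node).decode().strip())
```

A full system along these lines would pair each matching pattern with a transformation recipe, such as the XSLT files stored in the database that the abstract describes.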