{"title":"Stochastic Modeling and Performance Analysis of Energy-Aware Cloud Data Center Based on Dynamic Scalable Stochastic Petri Net","authors":"Hua He, Shanchen Pang","doi":"10.31577/cai_2020_1-2_28","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_28","url":null,"abstract":"The characteristics of cloud computing, such as large-scale, dynamics, heterogeneity and diversity, present a range of challenges for the study on modeling and performance evaluation on cloud data centers. Performance evaluation not only finds out an appropriate trade-off between cost-benefit and quality of service (QoS) based on service level agreement (SLA), but also investigates the influence of virtualization technology. In this paper, we propose an Energy-Aware Optimization (EAO) algorithm with considering energy consumption, resource diversity and virtual machine migration. In addition, we construct a stochastic model for Energy-Aware Migration-Enabled Cloud (EAMEC) data centers by introducing Dynamic Scalable Stochastic Petri Net (DSSPN). Several performance parameters are defined to evaluate task backlogs, throughput, reject rate, utilization, and energy consumption under different runtime and machines. Finally, we use a tool called SPNP to simulate analytical solutions of these parameters. The analysis results show that DSSPN is applicable to model and evaluate complex cloud systems, and can help to optimize the performance of EAMEC data centers.","PeriodicalId":345268,"journal":{"name":"Comput. 
Informatics","volume":"56 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133651426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method for Learning a Petri Net Model Based on Region Theory","authors":"Jiao Li, Ru Yang, Zhijun Ding, Meiqin Pan","doi":"10.31577/cai_2020_1-2_174","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_174","url":null,"abstract":"The deployment of robots in real life applications is growing. For better control and analysis of robots, modeling and learning are the hot topics in the field. This paper proposes a method for learning a Petri net model from the limited attempts of robots. The method can supplement the information getting from robot system and then derive an accurate Petri net based on region theory accordingly. We take the building block world as an example to illustrate the presented method and prove the rationality of the method by two theorems. Moreover, the method described in this paper has been implemented by a program and tested on a set of examples. The results of experiments show that our algorithm is feasible and effective.","PeriodicalId":345268,"journal":{"name":"Comput. Informatics","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126286593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Logic Petri Net-Based Repair Method of Process Models with Incomplete Choice and Concurrent Structures","authors":"Yuanxiu Teng, Yuyue Du, Liang Qi","doi":"10.31577/cai_2020_1-2_264","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_264","url":null,"abstract":"Current model repair methods cannot repair incomplete choice and concurrent structures precisely and simply. This paper presents a repair method of process models with incomplete choice and concurrent structures via logic Petri nets. The relation sets are constructed based on process trees, including branch sets, choice activity sets and concurrent activity sets. The deviations are determined by analyzing the relation between relation sets and activities in the optimal alignment. The model repair method is proposed for models with incomplete choice and concurrent structures via logic Petri nets according to different deviation positions. Finally, the correctness and effectiveness of the logic Petri net-based repair method are illustrated by simulation experiments.","PeriodicalId":345268,"journal":{"name":"Comput. Informatics","volume":"32 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132845703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fog and Cloud Computing Assisted IoT Model Based Personal Emergency Monitoring and Diseases Prediction Services","authors":"Zhancui Li, Longri Wen, Jimin Liu, Quanqiu Jia, Chengri Che, Chengfeng Shi, Haiying Cai","doi":"10.31577/cai_2020_1-2_5","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_5","url":null,"abstract":"Along with the rapid development of modern high-tech and the change of people's awareness of healthy life, the demand for personal healthcare services is gradually increasing. The rapid progress of information and communication technology and medical and bio technology not only improves personal healthcare services, but also brings the fact that the human being has entered the era of longevity. At present, there are many researches focused on various wearable sensing devices and implant devices and Internet of Things in order to capture personal daily life health information more conveniently and effectively, and significant results have been obtained, such as fog computing. To provide personal healthcare services, the fog and cloud computing is an effective solution for sharing health information. The health big data analysis model can provide personal health situation reports on a daily basis, and the gene sequencing can provide hereditary disease prediction. However, the injury mortality and emergency diseases since long ago caused death and great pain for the family. And there are no effective rescue methods to save precious lives and no methods to predict the disease morbidity likelihood. The purpose of this research is to capture personal daily health information based on sensors and monitoring emergency situations with the help of fog computing and mobile applications, and disease prediction based on cloud computing and big data analysis. 
Through the comparison of test results it was proved that the proposed emergency monitoring based on fog and cloud computing and the diseases prediction model based on big data analysis not only gain more of the rescue time than the traditional emergency treatment method, but they also accumulate lots of different personal healthcare related experience. The Taian 960 hospital of PLA and the Yanbian Hospital as IM testbed were joined to provide emergency monitoring tests, and to ensure the CVD and CVA morbidity likelihood medical big data analysis, the people around Taian city participated in personal health tests. Through the project, the five network layers architecture and integrated MAPE-K Model based EMDPS platform not only made the cooperation between hospitals feasible to deal with emergency situations, but also the Internet medicine for the disease prediction was built.","PeriodicalId":345268,"journal":{"name":"Comput. Informatics","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130445850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improved PDR Localization Algorithm Based on Particle Filter","authors":"Wei Wang, Cunhua Wang, Zhaoba Wang, Xiaoqian Zhao","doi":"10.31577/cai_2020_1-2_340","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_340","url":null,"abstract":"Pedestrian Dead Reckoning (PDR) helps to realize step frequency detection, step estimation and direction estimation through data collected by inertial sensors such as accelerometer, gyroscope, magnetometer, etc. The initial positioning information is used to calculate the position of pedestrians at any time, which can be applied to indoor positioning technology researching. In order to improve the position accuracy of pedestrian track estimation, this paper improves the step frequency detection, step size estimation and direction detection in PDR, and proposes a particle swarm optimization particle filter (PSO-PF) PDR location algorithm. Using the built-in accelerometer information of the smart phone to carry out the step frequency detection, the step frequency parameter construction model is introduced to carry out the step estimation, the direction estimation is performed by the Kalman filter fusion gyroscope and the magnetometer information, and the positioning data is merged by using the particle filter. The fitness function in the particle swarm optimization process is changed in the localization algorithm to improve particle diversity and position estimation. The experimental results show that the error rate of the improved step frequency detection method is reduced by about 2.1% compared with the traditional method. The angle accuracy of the direction estimation is about 4.12° higher than the traditional method. The overall positioning accuracy is improved.","PeriodicalId":345268,"journal":{"name":"Comput. 
Informatics","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133213998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Travel Mode Recognition from GPS Data Based on LSTM","authors":"Shaowu Zhu, Haichun Sun, Yongcheng Duan, Xiang Dai, S. Saha","doi":"10.31577/cai_2020_1-2_298","DOIUrl":"https://doi.org/10.31577/cai_2020_1-2_298","url":null,"abstract":"A large amount of GPS data contains valuable hidden information. With GPS trajectory data, a Long Short-Term Memory model (LSTM) is used to identify passengers' travel modes, i.e., walking, riding buses, or driving cars. Moreover, the Quantum Genetic Algorithm (QGA) is used to optimize the LSTM model parameters, and the optimized model is used to identify the travel mode. Compared with the state-of-the-art studies, the contributions are: 1. We designed a method of data processing. We process the GPS data by pixelating, get grayscale images, and import them into the LSTM model. Finally, we use the QGA to optimize four parameters of the model, including the number of neurons and the number of hidden layers, the learning rate, and the number of iterations. LSTM is used as the classification method where QGA is adopted to optimize the parameters of the model. 2. Experimental results show that the proposed approach has higher accuracy than BP Neural Network, Random Forest and Convolutional Neural Networks (CNN), and the QGA parameter optimization method can further improve the recognition accuracy.","PeriodicalId":345268,"journal":{"name":"Comput. Informatics","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126998990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Open Information Extraction System Using Sentence Difficulty Estimation","authors":"Vahideh Reshadat, Heshaam Faili","doi":"10.31577/cai_2019_4_986","DOIUrl":"https://doi.org/10.31577/cai_2019_4_986","url":null,"abstract":"The World Wide Web has a considerable amount of information expressed using natural language. While unstructured text is often difficult for machines to understand, Open Information Extraction (OIE) is a relation-independent extraction paradigm designed to extract assertions directly from massive and heterogeneous corpora. Allocation of low-cost computational resources is a main demand for Open Relation Extraction (ORE) systems. A large number of ORE methods have been proposed recently, covering a wide range of NLP tools, from ``shallow'' (e.g., part-of-speech tagging) to ``deep'' (e.g., semantic role labeling). There is a trade-off between NLP tools depth versus efficiency (computational cost) of ORE systems. This paper describes a novel approach called Sentence Difficulty Estimator for Open Information Extraction (SDE-OIE) for automatic estimation of relation extraction difficulty by developing some difficulty classifiers. These classifiers dedicate the input sentence to an appropriate OIE extractor in order to decrease the overall computational cost. Our evaluations show that an intelligent selection of a proper depth of ORE systems has a significant improvement on the effectiveness and scalability of SDE-OIE. It avoids wasting resources and achieves almost the same performance as its constituent deep extractor in a more reasonable time.","PeriodicalId":345268,"journal":{"name":"Comput. 
Informatics","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125275606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HSIC Regularized LTSA","authors":"Xinghua Zheng, Zhengming Ma, Hangjian Che, Lei Li","doi":"10.31577/cai_2019_4_917","DOIUrl":"https://doi.org/10.31577/cai_2019_4_917","url":null,"abstract":"Hilbert-Schmidt Independence Criterion (HSIC) measures statistical independence between two random variables. However, instead of measuring the statistical independence between two random variables directly, HSIC first transforms two random variables into two Reproducing Kernel Hilbert Spaces (RKHS) respectively and then measures the kernelled random variables by using Hilbert-Schmidt (HS) operators between the two RKHS. Since HSIC was first proposed around 2005, HSIC has found wide applications in machine learning. In this paper, a HSIC regularized Local Tangent Space Alignment algorithm (HSIC-LTSA) is proposed. LTSA is a well-known dimensionality reduction algorithm for local homeomorphism preservation. In HSIC-LTSA, behind the objective function of LTSA, HSIC between high-dimensional and dimension-reduced data is added as a regularization term. The proposed HSIC-LTSA has two contributions. First, HSIC-LTSA implements local homeomorphism preservation and global statistical correlation during dimensionality reduction. Secondly, HSIC-LTSA proposes a new way to apply HSIC: HSIC is used as a regularization term to be added to other machine learning algorithms. The experimental results presented in this paper show that HSIC-LTSA can achieve better performance than the original LTSA.","PeriodicalId":345268,"journal":{"name":"Comput. 
Informatics","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123820484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Group-Based Asynchronous Distributed Alternating Direction Method of Multipliers in Multicore Cluster","authors":"Dongxia Wang, Yong-mei Lei, Shenghong Jiang","doi":"10.31577/cai_2019_4_765","DOIUrl":"https://doi.org/10.31577/cai_2019_4_765","url":null,"abstract":"The distributed alternating direction method of multipliers (ADMM) algorithm is one of the effective methods to solve the global consensus optimization problem. Considering the differences between the communication of intra-nodes and inter-nodes in multicore cluster, we propose a group-based asynchronous distributed ADMM (GAD-ADMM) algorithm: based on the traditional star topology network, the grouping layer is added. The workers are grouped according to the process allocation in nodes and model similarity of datasets, and the group local variables are used to replace the local variables to compute the global variable. The algorithm improves the communication efficiency of the system by reducing communication between nodes and accelerates the convergence speed by relaxing the global consistency constraint. Finally, the algorithm is used to solve the logistic regression problem in a multicore cluster. The experiments on the Ziqiang 4000 showed that the GAD-ADMM reduces the system time cost by 35 % compared with the AD-ADMM.","PeriodicalId":345268,"journal":{"name":"Comput. Informatics","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130638431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Agent-Based System for Mobile Service Adaptation Using Online Machine Learning and Mobile Cloud Computing Paradigm","authors":"Piotr Nawrocki, B. Sniezynski, Jakub Kołodziej","doi":"10.31577/cai_2019_4_790","DOIUrl":"https://doi.org/10.31577/cai_2019_4_790","url":null,"abstract":"An important aspect of modern computer systems is their ability to adapt. This is particularly important in the context of the use of mobile devices, which have limited resources and are able to work longer and more efficiently through adaptation. One possibility for the adaptation of mobile service execution is the use of the Mobile Cloud Computing (MCC) paradigm, which allows such services to run in computational clouds and only return the result to the mobile device. At the same time, the importance of machine learning used to optimize various computer systems is increasing. The novel concept proposed by the authors extends the MCC paradigm to add the ability to run services on a PC (e.g. at home). The solution proposed utilizes agent-based concepts in order to create a system that operates in a heterogeneous environment. Machine learning algorithms are used to optimize the performance of mobile services online on mobile devices. This guarantees scalability and privacy. As a result, the solution makes it possible to reduce service execution time and power consumption by mobile devices. In order to evaluate the proposed concept, an agent-based system for mobile service adaptation was implemented and experiments were performed. The solution developed demonstrates that extending the MCC paradigm with the simultaneous use of machine learning and agent-based concepts allows for the effective adaptation and optimization of mobile services.","PeriodicalId":345268,"journal":{"name":"Comput. 
Informatics","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124939063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}