{"title":"Intrusion detection framework using stacked auto encoder based deep neural network in IOT network","authors":"G. Sugitha, B. C. Preethi, G. Kavitha","doi":"10.1002/cpe.7401","DOIUrl":"https://doi.org/10.1002/cpe.7401","url":null,"abstract":"Security is of paramount importance in the number of systems affiliated with increased IoT. Therefore, in this manuscript, a Stacked Auto Encoder based Deep Neural Network (DNN) fostered Intrusion Detection Framework is proposed to secure the IoT Environment. Here, the data is given to the preprocessing stage, in which redundancy elimination and replacement of missing value are done. Then, the preprocessed output is given to the feature selection process. Wherein, the Golden eagle optimization (GEO) algorithm selects the optimum features from pre‐processed data sets. Then selected features are given to the Stacked Auto encoder based deep neural network for classification, which classified the data, like normal, anomalies. Here, the proposed approach is implemented in Python language. To check the robustness of the proposed approach, the performance metrics, like accuracy, specificity, sensitivity, F‐measure, precision, and recall is measured. The simulation outcome show that the proposed Stacked Auto Encoder based Deep Neural Network based Intrusion Detection Framework (IDS‐FS‐GEO‐SAENN) method attains higher accuracy 99.75%, 97.85%, 95.13%, and 98.79, higher sensitivity 96.34%, 91.23%, 89.12%, and 87.25%, higher specificity 93.67%, 92.37%, 98.47%, and 94.78% compared with the existing methods, like FS‐SMO‐SDPN, FS‐WO‐RNNLSTM, FS‐hybrid GWOPSO‐RF, and FS‐CNNLSTMGRU, respectively.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80682677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A certificateless ring signature scheme based on lattice","authors":"Songshou Dong, Yihua Zhou, Yuguang Yang, Yanqing Yao","doi":"10.1002/cpe.7385","DOIUrl":"https://doi.org/10.1002/cpe.7385","url":null,"abstract":"For the problems that ring signature needs key escrow, has some security risks, and cannot avoid malicious key generation center (KGC) in the post‐quantum era, we design a certificateless ring signature scheme based on lattice (L‐CRSS). In our scheme, the bimodal Gaussian distribution is used to improve the security and efficiency. Compared with the previous ring signature scheme based on lattice, our design does not need key escrow, does not disclose the signer's identity, can avoid malicious KGC, achieves the constant signature size, and has higher security and efficiency in the post‐quantum era. Finally, under random oracle model (ROM), we prove that our scheme is anonymous against the full‐key exposure, and existentially unforgeable against adaptive chosen message attacks (EUF‐CMA).","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"85 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87234259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart bin: Waste segregation system using deep learning‐Internet of Things for sustainable smart cities","authors":"K. O. M. Aarif, C. M. Yousuff, B. Hashim, C. M. Hashim, Poruran Sivakumar","doi":"10.1002/cpe.7378","DOIUrl":"https://doi.org/10.1002/cpe.7378","url":null,"abstract":"Waste management is a major issue with the emerging growth in the world population, and we need to find efficient ways to recycle and reuse waste. Segregating waste has become a primary need in waste management as different types of waste like Bio & Non‐Bio‐degradable waste should be processed differently. Effective waste isolation at the fundamental level is especially required for this. Several Smart cities oriented smart garbage management systems are also proposed using Internet of Things (IoT) and GSM. The existing smart bins using IoT and wireless sensor network (WSN) are dependent significantly on two major things. First, multiple types of sensors, as a single sensor may not be able to detect different material waste, and second, the console (Microcontroller, Arduino Raspberry Pi) and connectivity which in turn dependent on programming and operating system. These limitations of the embedded smart bin are overcome by combining IoT with artificial intelligence approaches such as deep neural network (DNN) systems. In this paper, we have presented a Friendly Waste Segregator Using Deep Learning and the IoT to classify and isolate the waste objects as biodegradable and nonbiodegradable. Our proposed method utilizes, a robust deep learning network to classify the waste accurately and IoT for monitoring and connectivity using various sensors. Our proposed method with initial training can identify and segregte real‐time waste objects without human intervention with an average accuracy of 97.49 %. Our smart bin intends to provide optimized waste management of bio and non‐bio‐waste and help to build an ecologically safe society.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78213199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph convolutional neural networks‐based assessment of students' collaboration ability","authors":"Jinjiao Lin, Tianqi Gao, Yuhua Wen, Xianmiao Yu, Bi-Zhen You, Yanfang Yin, Yanze Zhao, Haitao Pu","doi":"10.1002/cpe.7395","DOIUrl":"https://doi.org/10.1002/cpe.7395","url":null,"abstract":"As 21st‐century skills have become increasingly important, collaboration ability is now considered essential in many areas of life. Different theoretical frameworks and assessment tools have emerged to measure this skill. However, more applied studies on its implementation and assessment in current educational settings are required. This research accordingly uses Graph Convolutional Neural Networks (GCNs) to assess students' collaboration ability from students' assignments. The Pearson correlation coefficient is used to measure the similarity of the level of students' collaboration ability, and similar students are linked together to establish an adjacency matrix. By sorting through relevant literature and selecting the feature words that represent the strength of collaboration ability, calculating the similarity between the preprocessed student data and each selected feature word, after which the highest value of the similarity as the feature value of the student for this feature and establish the student feature matrix. Finally, the GCNs are jointly trained by the adjacency matrix and the feature matrix. The results show that this method can effectively assess students' collaboration ability. Moreover, compared with other text classification methods, the GCNs selected in this paper has higher accuracy.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87335141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bitcoin price prediction using optimized multiplicative long short term memory with attention mechanism using modified cuckoo search optimization","authors":"Aarif Ahamed Shahul Hameed, Chandrasekar Ravi","doi":"10.1002/cpe.7384","DOIUrl":"https://doi.org/10.1002/cpe.7384","url":null,"abstract":"For the past few years, Bitcoin plays a vital role in both the economical and financial industries. In order to gain a huge return on investment, the investors are eager to forecast the future value of Bitcoin. However, Bitcoin price variation is quite nonlinear and chaotic in nature, so it creates more difficulty in forecasting future value. Researchers found that the multiplicative long short term memory (LSTM) model will be more efficient for predicting those complex variations. So, target mission is about to develop an optimized multiplicative LSTM with an Attention mechanism using Technical Indicators derived from historical data. A modified cuckoo search optimization model is proposed to tune the hyperparameter of the Deep Learning model. This novel optimization algorithm eliminates the local optimum and slower convergence problem of the cuckoo search optimization algorithm. Deibold Mariano test is performed to statistically evaluate the proposed model and it is inferred that the recommended methodology is statistically fit. Regression metrics such as root mean square error, mean square error and mean absolute error has been used for comparative evaluation with related benchmark techniques such as genetic algorithm optimized LSTM (GA–LSTM), particle swarm optimized LSTM (PSO–LSTM) and cuckoo search optimized LSTM (CSO–LSTM). The empirical result shows that the recommended methodology outperforms the taken benchmark models and provides better accuracy.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"74 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86361937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An effective blockchain‐based smart contract system for securing electronic medical data in smart healthcare application","authors":"Ansar Sonya, G. Kavitha","doi":"10.1002/cpe.7363","DOIUrl":"https://doi.org/10.1002/cpe.7363","url":null,"abstract":"In today's world, data management plays a key role in smart healthcare applications. However, data availability, trustworthiness, confidentiality, and security are the major issues faced by current healthcare data management systems. The modern healthcare systems manage Electronic Medical Records (EMR) using a centralized manner, which increases the single point of failure in the event of a natural catastrophe. In this paper, a new robust Blockchain‐based Medical Cloud (BC‐MedCl) framework has been proposed to provide secure EMR sharing between patient and doctor. Primarily, Internet of Things (IoT) devices will gather the health‐related data of the patient periodically. The proposed framework then stores the encrypted EMRs in cloud storage while their correlating hash values are placed into the blockchain. Finally, a decentralized selective smart contract‐based access control mechanism is developed to enhance the security of the proposed system. The prototype file‐sharing performance of the proposed architecture has been evaluated using the Ethereum platform. The performance results manifest that the proposed blockchain framework is more effective to handle EMR in the real‐time healthcare system with a superior accuracy ratio of 98.7% and a lesser latency ratio of 25% as compared with the existing systems.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"145 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86208402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent deep learning‐based hierarchical clustering for unstructured text data","authors":"Bankapalli Jyothi, Sumalatha Lingamgunta, Suneetha Eluri","doi":"10.1002/cpe.7388","DOIUrl":"https://doi.org/10.1002/cpe.7388","url":null,"abstract":"Document clustering is a technique used to split the collection of textual content into clusters or groups. In modern days, generally, the spectral clustering is utilized in machine learning domain. By using a selection of text mining algorithms, the diverse features of unstructured content is captured for ensuing in rich descriptions. The main aim of this article is to enhance a novel unstructured text data clustering by a developed natural language processing technique. The proposed model will undergo three stages, namely, preprocessing, features extraction, and clustering. Initially, the unstructured data is preprocessed by the techniques such as punctuation and stop word removal, stemming, and tokenization. Then, the features are extracted by the word2vector using continuous Bag of Words model and term frequency‐inverse document frequency. Then, unstructured features are performed by the hierarchical clustering using the optimizing the cut‐off distance by the improved sensing area‐based electric fish optimization (FISA‐EFO). Tuned deep neural network is used for improving the clustering model, which is proposed by same algorithm. Thus, the results reveal that the model provides better clustering accuracy than other clustering techniques while handling the unstructured text data.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73900134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OCTRA‐5G: Osmotic computing based task scheduling and resource allocation framework for 5G","authors":"Akashdeep Kaur, Rajesh Kumar, S. Saxena","doi":"10.1002/cpe.7369","DOIUrl":"https://doi.org/10.1002/cpe.7369","url":null,"abstract":"Long term evolution (LTE) mobile technology provides high data rate and low latency. 5G Technology is capable of handling the increasing number of IoT devices and provides ultra‐low latency, higher throughput, and higher reliability. Mobile edge computing (MEC) a key 5G technology strengthens the real‐time processing ability, releases the load on the Core Network, and helps in the real‐time processing of data, fulfilling the promise of high data rate and low latency. MEC is used to manage services efficiently to the near user resource. Using Osmotic Computing the services are efficiently scheduled and migrated. The work presented in this article proposes OCTRA‐5G Framework to effectively schedule services and allocate resources using Osmotic Computing (OC) by segregating the services into microservices and macroservices. The results are validated on the sets of 10, 20, and 30 gNBs (base stations) through simulation. OCTRA‐5G is tested on First Come First Serve (FCFS), Priority Scheduling (PS), and Shortest Job First (SJF) algorithm. FCFS provides less time complexity and higher throughput. The results presented using numerical simulations shows better performance by an average of 66.921% with OC than without OC.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89806133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature selection in high‐dimensional microarray cancer datasets using an improved equilibrium optimization approach","authors":"K. Balakrishnan, R. Dhanalakshmi","doi":"10.1002/cpe.7381","DOIUrl":"https://doi.org/10.1002/cpe.7381","url":null,"abstract":"Optimal feature selection of a high‐dimensional micro‐array datasets has gained a significant importance in medical applications for early detection and prevention of disease. Traditional Optimal feature selection percolates through a population‐based meta‐heuristic optimization technique, a Machine Learning classifier and traditional wrapper method for transforming the original feature set into a better feature set. These techniques require a number of iterations for the convergence of random solutions to the global optimum with high‐dimensionality issues such as over‐fitting, memory constraints, computational costs, and low accuracy. In this article, an efficient equilibrium optimization technique is proposed for an optimized feature selection that increases the diversity of the population in the search space through Random Opposition based learning and classify the best features using a 10‐fold cross‐validation‐based wrapper method. The proposed method is tested with six standard micro‐array datasets and compared with the conventional algorithms such as Marine Predators Algorithm, Harris Hawks Optimization, Whale Optimization Algorithm, and conventional Equilibrium Optimization. From the statistical results using the standard metrics, it is interpreted that the proposed method converges to the global minimum in a few iterations through optimized feature selection, fitness value and higher classification accuracy. This proves its efficacy in exploring and finding a better solution as compared to the counterpart algorithms. In addition to complexity analysis, these results indicate a global optimum solution, an effective representation of least amount of data‐high dimensionality reduction and an avoidance of over‐fitting problems. The source code is available at https://github.com/balasv/ROBL‐EOA/blob/main/ROBL_EOA.ipynb","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74226172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi‐queue priority‐based task scheduling algorithm in fog computing environment","authors":"Muhammad Fahad, M. Shojafar, Mubashir Abbas, Israr Ahmed, H. Ijaz","doi":"10.1002/cpe.7376","DOIUrl":"https://doi.org/10.1002/cpe.7376","url":null,"abstract":"Fog computing is a novel, decentralized and heterogeneous computing environment that extends the traditional cloud computing systems by facilitating task processing near end‐users on computing resources called fog nodes. These diverse and resource‐constrained fog devices process a large volume of tasks generated by various fog applications. These tasks are generated by various applications, some of which may be latency‐sensitive, while others may tolerate some degree of delay in their normal functions. Task scheduling determines when a task should be allocated to a computing resource and how long that task can occupy the assigned resource. The majority of task scheduling algorithms focus on prioritizing the latency‐sensitive tasks only, which results in the long waiting time for the other type of tasks. Hence, these priority‐based schedulers cause task starvation for less important tasks while achieving delay‐optimal results for latency‐sensitive tasks. As a result, in this paper, we propose MQP, a multi‐queue priority‐based preemptive task scheduling approach that achieves a balanced task allocation for those applications that can tolerate a certain amount of processing delay and the latency‐sensitive fog applications. At run‐time, the MQP algorithm categorizes tasks as short and long based on their burst time. MQP algorithm maintains a separate task queue for each task category and dynamically updates the time slot value for preemption. The proposed technique's major purpose is to reduce response time for those data‐intensive applications in the fog computing environment, which include both latency‐sensitive tasks and tasks which are less latency‐sensitive, thereby addressing the starvation problem for less latency‐sensitive tasks. A smart traffic management case study is created to model a scenario with both latency‐sensitive short and less latency‐sensitive long tasks. We implement the MQP algorithm using iFogSim and confirm that it reduces the service latencies for long tasks. Simulation results show that the MQP algorithm allocates tasks to a fog device more efficiently and reduces the service latencies for long tasks. The average value of percentage reduction in the latency across all experimental configurations achieved is 22.68% and 38.45% in comparison to First Come‐First Serve and shortest job first algorithms.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91265898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}