{"title":"Block chain enabled auditing with optimal multi‐key homomorphic encryption technique for public cloud computing environment","authors":"Venkata Naga Rani Bandaru, P. Visalakshi","doi":"10.1002/cpe.7128","DOIUrl":"https://doi.org/10.1002/cpe.7128","url":null,"abstract":"Cloud storage is a significant cloud computing (CC) service which enables the client to save and retrieve the data at anytime and anywhere. Due to the increased demand and familiarity of the CC environment, different kinds of security threats and susceptibilities are raised. Data integrity and privacy are the major problems in CC environment where the data can be saved in distinct geographic regions. So, privacy preservation and data integrity become important factors of the user concern related to the CC environment. Several auditing protocols are majorly dependent upon the conventional public key infrastructure, which led to high computational complexity and it is unsuitable for the setting of multiple users. To resolve these issues, this study develops a new block chain enabled auditing with optimal multi‐key homomorphic encryption (BEA‐OMKHE) technique for public cloud environment. The proposed BEA‐OMKHE technique aims to assure data integrity, security, and auditing in public cloud storage. Besides, an OMKHE technique is derived to accomplish data integrating into the cloud environment by the design of end to end encryption system. A secure generation of keys and encryption processes are carried out by the use of MKHE technique; thereby the data becomes highly secure. In addition, the choice of keys is performed by the improved beetle antenna search optimization (IBAS) algorithm. Therefore, the proposed BEA‐OMKHE technique offers an efficient way of enhancing the data integrity in CC method. The performance validation of the BEA‐OMKHE technique takes place and the results are inspected under various aspects. The comparative result analysis ensured the betterment of the BEA‐OMKHE technique interms of different measures such as communication cost, encryption time, decryption time, computation cost, privacy preserving rate, and authentication accuracy.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"150 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87272427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"F‐SWIR: Rumor Fick‐spreading model considering fusion information decay in social networks","authors":"Weimin Li, Dingmei Wei, Xiaokang Zhou, Shaohua Li, Qun Jin","doi":"10.1002/cpe.7166","DOIUrl":"https://doi.org/10.1002/cpe.7166","url":null,"abstract":"The spread of rumors has a major negative impact on social stability. Traditional rumor spreading models are mostly based on infectious disease models and do not consider the influence of individual differences and the network structure on rumor spreading. In this paper, we propose a rumor Fick‐spreading model that integrates information decay in social networks. The dissemination of rumors in social networks is random and uncertain and is affected by the dissemination capabilities of individuals and the network environment. The rumor Fick‐transition coefficient and Fick‐transition gradient are defined to determine the influence of the individual transition capacity and the network environment on rumor propagation, respectively. The Fick‐state transition probability is used to describe the probability of change of an individual's state. Moreover, an information decay function is defined to characterize the self‐healing probability of individuals. According to the different roles and reactions of users during rumor dissemination, the user state and the rumor dissemination rules among users are refined, and the influence of the network structure on the rumor dissemination is ascertained. The experimental results demonstrate that the proposed model outperforms other rumor spread models.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73126309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A methodology for assessing computation/communication overlap of MPI nonblocking collectives","authors":"Alexandre Denis, Julien Jaeger, E. Jeannot, Florian Reynier","doi":"10.1002/cpe.7168","DOIUrl":"https://doi.org/10.1002/cpe.7168","url":null,"abstract":"By allowing computation/communication overlap, MPI nonblocking collectives (NBC) are supposed to improve application scalability and performance. However, it is known that to actually get overlap, the MPI library has to implement progression mechanisms in software or rely on the network hardware. These mechanisms may be present or not, adequate or perfectible, they may have an impact on communication performance or may interfere with computation by stealing CPU cycles. From a user point of view, assessing and understanding the behavior of an MPI library concerning computation/communication overlap is difficult. In this article, we propose a methodology to assess the computation/communication overlap of NBC. We propose new metrics to measure how much communication and computation do overlap, and to evaluate how they interfere with each other. We integrate these metrics into a complete methodology. We compare our methodology with state of the art metrics and benchmarks, and show that ours provides more meaningful informations. We perform experiments on a large panel of MPI implementations and network hardware and show when and why overlap is efficient, nonexistent or even degrades performance.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89351200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Niyama: Node scheduling for cloud workloads with resource isolation","authors":"Meghana Thiyyakat, Subramaniam Kalambur, D. Sitaram","doi":"10.1002/cpe.7196","DOIUrl":"https://doi.org/10.1002/cpe.7196","url":null,"abstract":"Cloud providers place tasks from multiple applications on the same resource pool to improve the resource utilization of the infrastructure. The consequent resource contention has an undesirable effect on latency‐sensitive tasks. In this article, we present Niyama—a resource isolation approach that uses a modified version of deadline scheduling to protect latency‐sensitive tasks from CPU bandwidth contention. Conventionally, deadline scheduling has been used to schedule real‐time tasks with well‐defined deadlines. Therefore, it cannot be used directly when the deadlines are unspecified. In Niyama, we estimate deadlines in intervals and secure bandwidth required for the interval, thereby ensuring optimal job response times. We compare our approach with cgroups: Linux's default resource isolation mechanism used in containers today. Our experiments show that Niyama reduces the average delay in tasks by 3 ×$$ times $$ –20 ×$$ times $$ when compared to cgroups. Since Linux's deadline scheduling policy is work‐conserving in nature, there is a small drop in the server‐level CPU utilization when Niyama is used naively. We demonstrate how the use of core reservation and oversubscription in the inter‐node scheduler can be used to offset this drop; our experiments show a 1.3 ×$$ times $$ –2.24 ×$$ times $$ decrease in delay in job response time over cgroups while achieving high CPU utilization.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"90 8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87728387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient enhanced prefix hash tree model for optimizing the storage and image deduplication in cloud","authors":"G. Sujatha, R. Raj","doi":"10.1002/cpe.7199","DOIUrl":"https://doi.org/10.1002/cpe.7199","url":null,"abstract":"The popularity of the cloud storage space mainly attracted organizations to store their data in them. Therefore, the avoidance of duplicate data contents is unavoidable and several users share the cloud storage space for data storage, and sometimes this makes higher storage space utilization. Because of the extremely high duplicate copy, memory wastage arises in the case of multimedia data. Identifying the final duplicate copies in the cloud takes more time. To overcome this problem, we employ a significant storage optimization model for deduplication. The digital data hash value is stored by requiring an additional memory space. This study proposed an enhanced prefix hash tree (EPHT) method to optimize the image and text deduplication system to reduce the overhead caused by this procedure. The efficiency of the proposed approach is compared with the interpolation search technique using different levels of tree height (2, 4, 2, 8, 16) in terms of space and time complexity. The proposed EPHT technique shows improvements in terms of speed and space complexity when the number of levels in the EPHT increases.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91104143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the application of artificial intelligence in computer network technology in the era of big data","authors":"Zhenyu Xu","doi":"10.1002/cpe.7262","DOIUrl":"https://doi.org/10.1002/cpe.7262","url":null,"abstract":"At present, artificial intelligence technology has been widely used. Artificial intelligence technology can not only enrich people's lives, effectively improve work efficiency, achieve technical development, but also improve the efficiency of enterprises, and bring rich profits for the development of enterprises. Therefore, the author systematically analyzes the advantages and disadvantages of AI technology, and expounds the application of AI technology in computer network technology from the aspects of network security technology, enterprise management technology, network system and evaluation technology.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86162570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FogiRecruiter: A fog‐enabled selection mechanism of crowdsourcing for disaster management","authors":"Riya Samanta, S. Ghosh","doi":"10.1002/cpe.7207","DOIUrl":"https://doi.org/10.1002/cpe.7207","url":null,"abstract":"In the Internet of Things framework, crowdsourcing (CS) has played a significant role. A sufficient number of participants carrying sensors or IoT devices are necessary to obtain maximum coverage within a given budget for CS a task. Cloud computing is used for centralized processing, storage, and large‐scale data analysis. The delay associated with transferring data to cloud servers creates a time‐consuming decision‐making process. Fog computing is responsible for this capability. As a result, FogiRecruiter, a novel framework, is offered to efficiently choose participants for data collection from the vital environment while staying within a budget. We also utilize fuzzy logic to pick the best fog nodes for relaying data to them and then to faraway cloud servers. Realizing emergency communication, despite the fact that a direct connection to the cloud is inconvenient. Simulations and prototype testing are used to show the efficacy of the proposed approach.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82124786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and localization of image tampering in digital images with fused features","authors":"Mohassin Ahmad, F. Khursheed","doi":"10.1002/cpe.7191","DOIUrl":"https://doi.org/10.1002/cpe.7191","url":null,"abstract":"In digital forensics, image tamper detection and localization have attracted increased attention in recent days, where the standard methods have limited description ability and high computational costs. As a result, this research introduces a novel picture tamper detection and localization model. Feature extraction, tamper detection, as well as tamper localization are the three major phases of the proposed model. From the input digital images, a group of features like “Scale‐based Adaptive Speeded Up Robust Features (SA‐SURF), Discrete Wavelet Transform (DWT) based Patched Local Vector Pattern (LVP) features, HoG feature with harmonic mean based PCA and MBFDF” are extracted. Then, with this extracted feature strain the “optimized Convolutional Neural Network (CNN)” will be trained in the tamper detection phase. Since it is the key decision‐maker about the presence/absence of tamper, its weighting parameters are fine‐tuned via a novel improved Sea‐lion Customized Firefly algorithm (ISCFF) model. This ensures the enhancement of detection accuracy. Once an image is recognized to have tampers, then it is essential to identify the tamper localization. In the tamper localization phase, the copy‐move tampers are localized using the SIFT features, splicing tampers are localized using the DBN and the noise inconsistency is localized with a newly introduced threshold‐based tamper localization technique. The simulation outcomes illustrate that the adopted model attains better tamper detection as well as localization performance over the existing methods.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84515170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and categorization of acute intracranial hemorrhage subtypes using a multilayer DenseNet‐ResNet architecture with improved random forest classifier","authors":"B. M. Jenefer, K. Senathipathi, Aarthi, Annapandi","doi":"10.1002/cpe.7167","DOIUrl":"https://doi.org/10.1002/cpe.7167","url":null,"abstract":"In this article, the detection and categorization of acute intracranial hemorrhage (ICH) subtypes using a multilayer DenseNet‐ResNet architecture with improved random forest classifier (IRF) is proposed to detect the subtypes of intracerebral hemorrhage with high accuracy and less computational time. Here, the brain CT images are taken from the physionet repository publicly dataset. Then the images are preprocessed to eliminate the unwanted noises. After that, the image features are extracted by using multilayer densely connected convolutional network (DenseNet) combined with residual network (ResNet) architecture with multiple convolutional layers. The subtypes are epidural hemorrhage (EDH), subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), intraventricular hemorrhage (IVH) are classified by using an IRF classifier with high accuracy. The simulation process is carried out in MATLAB site. The proposed multilayer‐DenseNet‐ResNet‐IRF attains higher accuracy 23.44%, 31.93%, 42.83%, 41.9% is compared with the existing methods, such as deep learning algorithm for automatic detection and classification of acute intracranial hemorrhages in head CT scans (ICH‐DC‐2D‐CNN), fusion‐based deep learning along nature‐inspired algorithm for the diagnosis of intracerebral hemorrhage (ICH‐DC‐FSVM), and detection of intracranial hemorrhage on CT scan images using convolutional neural network (ICH‐DC‐CNN) and double fully convolutional networks (FCNs), respectively.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82397746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}