{"title":"ProACT: Probabilistic Analysis and Countermeasures Tool for Blockchain Supply Chains With Smart Contracts Composition","authors":"Rangu Manjula, Naveen Chauhan","doi":"10.1002/cpe.70004","DOIUrl":"https://doi.org/10.1002/cpe.70004","url":null,"abstract":"<div>\u0000 \u0000 <p>This study introduces the ProACT (Probabilistic Analysis and Countermeasures Tool) framework, which uses Bayesian Networks, Colored Petri Nets, and NuSMV (new symbolic model validator) to check and improve smart contracts in blockchain-based supply chains. The main aim is to make smart contracts safer and more reliable in the Agrochem and Fertilizers Supply Chain. The research focuses on the need for better security in blockchain applications. There is a gap in current methods that handle the unpredictable nature of smart contracts. This study seeks to fill this gap with a new approach. The ProACT framework combines different models. Bayesian Networks help understand dependencies and uncertainties. Colored Petri Nets show how contracts change over time. NuSMV checks the contracts formally to find issues. The study includes a detailed case study and tests. Results show that ProACT is better at finding problems and checking contracts quickly. It also reduces the impact of attacks on the system. The findings are important because they offer a new way to make smart contracts safer. This helps improve the overall security of blockchain-based supply chains.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Managing Clinical Research on Blockchain Using FAIR Principles","authors":"Seyma Cihan, Adnan Ozsoy, Oya Deniz Beyan","doi":"10.1002/cpe.70005","DOIUrl":"https://doi.org/10.1002/cpe.70005","url":null,"abstract":"<div>\u0000 \u0000 <p>Blockchain technology has the potential to extend beyond its traditional use in cryptocurrency and make significant strides in critical sectors like healthcare. Clinical research, which plays a pivotal role in enhancing healthcare quality by guiding activities, determining equipment usage, and recommending preferred medications, stands to benefit greatly from blockchain integration. The unique technical capabilities of blockchain offer promising solutions across various phases of clinical research, from study design and patient recruitment to report study findings. By addressing current challenges in the clinical research process, blockchain technology can notably enhance research quality and, consequently, improve patient care. Although conceptual framework studies regarding blockchain technology are in the available literature, practical implementations of this technology remain relatively scarce. Thus, in this study, a private permissioned Hyperledger Fabric blockchain platform was developed to manage clinical research. As a use case, a blockchain-based distributed framework for counting and reporting COVID-19 epidemiological parameters and statistics among healthcare centers has been defined in the study. Besides, to make clinical research data findable, accessible, interoperable, and reusable (FAIR), we integrated FAIR principles into the developed blockchain-based clinical research management system. Additionally, a use case logic has been implemented as a smart contract (chaincode) and invoked on Fabric Network. This study, in general, represents a crucial step towards bridging the gap between theoretical understanding and real-world application within the domain of blockchain technology. Moreover, the performance of the Fabric Network was evaluated by analyzing the chain code execution performance according to the size of the patient data. By deploying a functioning clinical research network and executing smart contracts, this study contributes to the practical utilization of blockchain technology along with FAIR principles integration to the entire clinical research process, which is a first in the literature.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cascade Ownership Verification Framework Based on Invisible Watermark for Model Copyright Protection","authors":"Ruoxi Wang, Yujia Zhu, Xia Daoxun","doi":"10.1002/cpe.8394","DOIUrl":"https://doi.org/10.1002/cpe.8394","url":null,"abstract":"<div>\u0000 \u0000 <p>Successfully training a model requires substantial computational power, excellent model design, and high training costs, which implies that a well-trained model holds significant commercial value. Protecting a trained Deep Neural Network (DNN) model from Intellectual Property (IP) infringement has become a matter of intense concern recently. Particularly, embedding and verifying watermarks in black-box models without accessing internal model parameters, while ensuring the robustness and invisibility of the watermark, remains a challenging issue. Unlike many existing methods, we propose a cascade ownership verification framework based on invisible watermarks, with a focus on how to effectively protect the copyright of black-box watermark models and detect unauthorized users' infringement behaviors. This framework consists of two parts: watermark generation and copyright verification. In the watermark generation phase, watermarked samples are generated from key samples and label images. The difference between watermarked samples and key samples is imperceptible, while a specific identifier has been injected into the watermarked samples, leaving a backdoor as an entry point for copyright verification. The copyright verification phase employs hypothesis testing to enhance the confidence level of verification. In image classification tasks based on MNIST, CIFAR-10, and CIFAR-100 datasets, experiments were conducted on several popular deep learning models. The experimental results show that this framework offers high security and effectiveness in protecting model copyrights and demonstrates strong robustness against pruning and fine-tuning attacks.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Objective Workflow Scheduling in Cloud Using Archimedes Optimization Algorithm","authors":"Shweta Kushwaha, Ravi Shankar Singh, Kanika Prajapati","doi":"10.1002/cpe.8393","DOIUrl":"https://doi.org/10.1002/cpe.8393","url":null,"abstract":"<div>\u0000 \u0000 <p>Cloud computing has changed the technology landscape for over a decade and led to an astounding growth in the number of applications it may be used for. Consequently, there has been a significant spike in the demand for improved algorithms to schedule workflows efficiently. These were mostly concerned with heuristic, metaheuristic, and hybrid approaches to workflow scheduling that mostly suffer from the problem of local optima entrapment. Due to such heavy traffic on the cloud resources, there is still a need for less computationally complex approaches. In light of this, this article proposes a novel approach: a multi-objective Modified Local Escaping Archimedes Optimization (MLEAO) algorithm for workflow scheduling. This strategy involves initialization of the population of Archimedes Optimization algorithm through the HEFT algorithm to provide an inclination towards the solutions with improved makespan while achieving a cost-efficient workflow scheduling decision and avoiding the problem of local optima entrapment using a local escaping operation. To validate the efficacy of our approach, we conducted extensive experiments using scientific workflows as benchmarks. Through our investigations, we significantly improved makespan, cost, resource utilization, and energy consumption. Moreover, the effectiveness of our proposed approach is also verified by performance metrics such as hypervolume, s-metric, and dominance relationships between the proposed and state-of-the-art approaches.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Distributed Information Granulation Method for Time Series Clustering","authors":"Yashuang Mu, Tian Liu, Wenqiang Zhang, Hongyue Guo, Lidong Wang, Xiaodong Liu","doi":"10.1002/cpe.8395","DOIUrl":"https://doi.org/10.1002/cpe.8395","url":null,"abstract":"<div>\u0000 \u0000 <p>Time series clustering is an important research problem in machine learning and data mining. With the rapid increase in the amount of time series data, many traditional clustering algorithms cannot directly deal with large-scale time series due to some limitations in the memory capacity and the execution time. In this study, we suggest a distributed information granulation method for large-scale time clustering problem. First, a distributed time series partitioning method is designed to randomly divide the original time series dataset into some data blocks. Then, the distributed time series granulation method is developed in the map-reduce framework by the principle of reasonable granularity, where each time series can be described by some representative data points to show the trend state information. Finally, we introduce the large-scale time series clustering method in terms of the fuzzy C-means clustering algorithm. The experimental studies demonstrate the feasibility and the effectiveness on several UCR publicly benchmark time series datasets. Compared with the classical clustering methods, the proposed method can achieve a 4.86–9.65% improvement in average clustering accuracy. Meanwhile, the proposed method exhibits more advantages in both unequal length time series clustering and execution time.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-Light Image Enhancement Algorithm Based on Improved MSRCP With Chromaticity Preservation","authors":"Wenjian Feng, Zhiwen Wang, Chunmiao Wei, Xinhui Jiang, Yuhang Wang, Jiexia Huang","doi":"10.1002/cpe.8396","DOIUrl":"https://doi.org/10.1002/cpe.8396","url":null,"abstract":"<div>\u0000 \u0000 <p>In response to the issues of poor sharpness and low information entropy in traditional MSRCP (Multi-Scale Retinex with Color Restoration) algorithms for image enhancement, we propose an improved MSRCP algorithm for low-light image enhancement with chromaticity preservation. First, we replaced the extrema calculation method in the color restoration function with a calculation method based on clipped pixel ratios. Then, we combined guided filtering and Gaussian filtering to calculate the incident component. Finally, we conducted experiments using six different low-light images and compared the results with the traditional MSRCP algorithm, such as SSR, MSR, MSRCR, and MSRCP. The experimental results show that our method improved the sharpness and information entropy values in the five comparison images by 5.6%–35.6% and 0.18%–15.3%, respectively.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive Hybrid Compression for Sparse Matrix-Vector Multiplication on GPU","authors":"Zhixiang Zhao, Yanxia Wu, Guoyin Zhang, Yiqing Yang, Ruize Hong","doi":"10.1002/cpe.8366","DOIUrl":"https://doi.org/10.1002/cpe.8366","url":null,"abstract":"<div>\u0000 \u0000 <p>Sparse Matrix-Vector Multiplication (SpMV) is a fundamental operation in scientific computing, machine learning, and data analysis. The performance of SpMV on GPUs is crucial for accelerating various applications. However, the efficiency of SpMV on GPUs is significantly affected by irregular memory access patterns, high memory bandwidth requirements, and insufficient exploitation of parallelism. In this paper, we propose a Recursive Hybrid Compression (RHC) method to address these challenges. RHC begins by splitting the initial matrix into two portions: an Ellpack (ELL) portion and a Coordinate (COO) portion. This partitioning is followed by further recursive division of the COO portion into additional ELL and COO portions, continuing this process until predefined termination criteria, based on a percentage threshold of the number of nonzero elements, are met. Additionally, we introduce a dynamic partitioning method to determine the optimal threshold for partitioning the matrix into ELL and COO portions based on the distribution of nonzero elements and the memory footprint. We develop the RHC algorithm to fully exploit the advantages of the ELL kernel on GPUs and achieve high thread-level parallelism. We evaluated our proposed method on two different NVIDIA GPUs: the GeForce RTX 2080 Ti and the A100, using a set of sparse matrices from the SuiteSparse Matrix Collection. We compare RHC with NVIDIA's cuSPARSE library and three state-of-the-art methods: SELLP, MergeBase, and BalanceCSR. RHC achieves average speedups of 2.13<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mo>×</mo>\u0000 </mrow>\u0000 <annotation>$$ times $$</annotation>\u0000 </semantics></math>, 1.13<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mo>×</mo>\u0000 </mrow>\u0000 <annotation>$$ times $$</annotation>\u0000 </semantics></math>, 1.87<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mo>×</mo>\u0000 </mrow>\u0000 <annotation>$$ times $$</annotation>\u0000 </semantics></math>, and 1.27<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mo>×</mo>\u0000 </mrow>\u0000 <annotation>$$ times $$</annotation>\u0000 </semantics></math> over cuSPARSE, SELLP, MergeBase, and BalanceCSR, respectively.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Multi-Scale Parallel Joint Optimization CNN for Arrhythmia Diagnosis","authors":"Wenping Chen, Huibin Wang, Zhe Chen, Lili Zhang","doi":"10.1002/cpe.8383","DOIUrl":"https://doi.org/10.1002/cpe.8383","url":null,"abstract":"<div>\u0000 \u0000 <p>The morphological characteristics of electrocardiograms (ECGs) serve as a fundamental basis for diagnosing arrhythmias. Convolutional neural networks (CNNs), leveraging their local receptive field properties, effectively capture the morphological features of ECG signals and have been extensively employed in the automatic diagnosis of arrhythmias. However, the variability in the duration of ECG morphological features renders single-scale convolutional kernels inadequate for fully extracting these features. To address this limitation, this study proposes a multi-scale parallel joint optimization convolutional neural network (MPJO_CNN). The proposed method utilizes convolutional kernels of varying scales to extract ECG features, further refining these features via parallel computation and implementing a joint optimization strategy to enhance classification performance. Experimental results demonstrate that on the MIT-BIH arrhythmia database, this method not only achieved state-of-the-art performance, with an accuracy of 99.41% and an F1 score of 98.09%, but also showed high sensitivity to classes with fewer samples.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numerical Analysis of the Effetcs of Blended Split Winglets on Aerodynamic Performance of Aircraft","authors":"Ensar Atasoy, Burhan Çuhadaroğlu","doi":"10.1002/cpe.8390","DOIUrl":"https://doi.org/10.1002/cpe.8390","url":null,"abstract":"<div>\u0000 \u0000 <p>Air flows with different pressures on the upper and lower surfaces of a wing meet at the wing tip and trailing edge owing to the finite dimensions of the aircraft wings. As a result, wing tip vortices are formed, which increase the induced drag and reduce the aerodynamic performance of the wing. In this study, inspired by bird wings, we aim to design a winglet that reduces wing-tip vortices using a biomimicry method. To verify the numerical method, the ONERA M6 wing, on which experimental and numerical studies were conducted by NASA, is used. The flow around the ONERA M6 wing is modeled using SOLIDWORKS software. The flow region around the wing, whose solid model is created, is divided into finite volumes using the ANSYS Fluent software. To verify the numerical model, a verification analysis is conducted on the pressure coefficient changes using experimental conditions in the literature. After the verification study, computations are performed at various angles of attack for the NACA 4412 winglet profile, which is added to the tip of the ONERA M6 wing at a sweep angle of 60° and different cant angles. The geometric configuration that provides the highest value for the aerodynamic performance parameter known as the lift coefficient to drag coefficient ratio (<i>C</i><sub>L</sub>/<i>C</i><sub>D</sub>) is determined. The results obtained from the study show that the highest <i>C</i><sub>L</sub>/<i>C</i><sub>D</sub> is obtained for the flow at 3.06° angle of attack and 15° cant angle winglet. Moreover, it is also calculated that the blended split winglet design, in which the upper winglet is designed to be longer with a 15° cant angle and the lower winglet to be shorter with a −30° cant angle, provides a 24.6% improvement in the <i>C</i><sub>L</sub>/<i>C</i><sub>D</sub> compared to the base wing.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task Allocation Algorithm and Simulation Analysis for Multiple AMRs in Digital-Intelligent Warehouses","authors":"Zixia Chen, Tingquan Gu, Zelin Chen, Bingda Zhang","doi":"10.1002/cpe.8382","DOIUrl":"https://doi.org/10.1002/cpe.8382","url":null,"abstract":"<div>\u0000 \u0000 <p>In digital-intelligent warehouses, the heavy handling tasks, complex algorithms with high computational demands, and vast solution spaces pose significant challenges to achieving stable, efficient, and balanced operation of multiple Autonomous Mobile Robots (AMRs) for automated cargo handling. This paper focuses on a virtual smart warehouse environment and employs Python software to conduct simulation experiments for multi-AMR task allocation. The simulated smart warehouse comprises three idle AMRs and 16 task points that require transportation. The experimental simulations demonstrate that the improved genetic algorithm can find the global optimal solution with relatively low computational cost, meeting the fast response requirements in real-world operations. It enables stable operation, high efficiency, and balanced task allocation for multiple AMRs. The simulation results validate the reliability of the proposed method, effectively addressing the issues of multi-AMR task allocation and path planning in digital-intelligent warehouses.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143380194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}