{"title":"Mixed-Criticality Scheduling Toward Real-Time Applications in a Vehicular Edge Computing System","authors":"Biao Hu, Xincheng Yang","doi":"10.1002/cpe.70226","DOIUrl":"https://doi.org/10.1002/cpe.70226","url":null,"abstract":"<div>\u0000 \u0000 <p>Scheduling applications in vehicular edge computing (VEC) systems poses significant challenges due to strict timing constraints and varying levels of criticality. This paper presents a three-stage scheduling framework designed to efficiently manage the execution of mixed-criticality applications. The proposed method introduces scheduling policies that reduce the complexity of scheduling dual-criticality DAG (Directed Acyclic Graph) applications on servers by transforming them into equivalent uniprocessor scheduling problems. To further enhance performance, a population-based evolutionary algorithm is employed to optimize virtual machine configurations on each server, while a game-theoretic approach assigns DAG applications to servers. Experimental results show that the proposed scheme outperforms both state-of-the-art dynamic programming (DP) and particle swarm optimization (PSO) methods. The proposed MCS approach achieves a strong balance between scheduling quality and computational efficiency, with an <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>α</mi>\u0000 </mrow>\u0000 <annotation>$$ alpha $$</annotation>\u0000 </semantics></math> of 0.87, an 80% success rate, and a low computation time (310 s), making it well-suited for real-time edge systems. Compared to other methods like PSO+, DP, and OneVM, MCS offers near-optimal performance while avoiding the high computational cost and scalability limitations faced by those alternatives.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144773589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TAFLE: Task-Aware Flow Scheduling in Spine-Leaf Network via Hierarchical Auto-Associative Polynomial Reg Net","authors":"Vinu Josephraj, Wilfred Franklin Sundara Raj","doi":"10.1002/cpe.70167","DOIUrl":"https://doi.org/10.1002/cpe.70167","url":null,"abstract":"<div>\u0000 \u0000 <p>Cloud computing has become crucial to modern infrastructure, which enables data-intensive applications to thrive in scalable environments. The backbone of cloud computing is the massive data center (DC) servers. The DC networks have unique traffic demands for different tasks, which need to be considered for efficient network traffic (NT) management and enhancing Quality of Service (QoS). Existing solutions fail to consider these unique traffic demands, which result in suboptimal performance in large-scale, data-sensitive environments. To overcome these challenges, a novel Traffic-aware FLow reconfiguration in spine lEaf (TAFLE) system has been proposed in this paper. The proposed model addresses the inefficiencies of QoS-based network traffic allocation by considering the task-level requirements of data-sensitive applications. The proposed solution combines a Deep Packet Analytics (DPA) engine and the Hierarchical Auto-Associative Polynomial Reg Net (HAP-Reg Net) model for reconfiguring the flow based on QoS classes and predicted traffic volumes. Several criteria have been used to evaluate the proposed TAFLE model, such as the f1-score, accuracy, precision, recall, Mean Absolute Percentage Error (MAPE), and Mean Absolute Error (MAE). Experimental findings show that the system significantly improves prediction accuracy and resource allocation, which leads to better overall performance. Experimental results demonstrate that the existing techniques, such as CNN, LSTM, and GRU models, achieve 96.12%, 96.08%, and 95.65% accuracy, while the novel HAP-Reg Net model achieves 96.49% accuracy. Additionally, the proposed TAFLE model has a greater accuracy of 99.2% than previous methods like VAMBIG, AMFQ, and D-LSLP, which have 95.51%, 97.35%, and 98.89% accuracy, respectively.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144773761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Optimization Algorithm and Application of Improved Northern Goshawk Based on Fusion Strategy","authors":"Xu Yong, Sang Bicong, Zhang Yi","doi":"10.1002/cpe.70136","DOIUrl":"https://doi.org/10.1002/cpe.70136","url":null,"abstract":"<p>Northern Goshawk Optimization Algorithm (NGO), as a new swarm intelligence optimization algorithm, shows certain potential in solving complex optimization problems because of its unique search mechanism. However, the algorithm still faces some challenges in practical application, such as slow convergence speed, insufficient optimization precision, and easy to fall into local optimality. These problems limit its application range and efficiency in complex multimodal optimization problems. To overcome the above shortcomings, this paper proposes an improved Northern Goshawk optimization algorithm (WNGO) based on a fusion strategy. The fusion strategy is a novel approach that combines the strengths of different optimization algorithms to address the problems of slow convergence speed, accuracy of optimization, and easy falling into local optimal. First, the Piecewise chaotic mapping is used to initialize the Northern Goshawk population, which enhances the global search capability of the algorithm by providing a wider search space in the initial stage. Second, in order to achieve the adequacy of the solution space search and the performance of the optimization problem in the prey recognition stage of the Northern Goshawk, the location update formula of the prey recognition stage of the Northern Goshawk is replaced by the location update formula of the Walrus optimization algorithm in the exploration stage. Then, through the mirror reverse learning strategy, the reverse solution generated by the lens imaging principle can provide a new search direction when the Northern Goshawk optimization algorithm falls into the local optimal, increase the probability of finding the global optimal solution, and improve the global optimization ability, so that it can jump out of the local optimal in the later iteration. Finally, the adaptive T-distribution variation strategy is used to enhance the local exploration ability in the late iteration, thus improving the convergence speed of the Northern Goshawk optimization algorithm. This paper evaluates the performance of the improved WNGO algorithm. By comparing the CEC2021 test function and other advanced improved swarm intelligence methods, it is proved that the improved algorithm has better accuracy, robustness, and convergence speed. It is tested in two engineering design problems. The results show that the WNGO algorithm can break through the local optimal solution, obtain higher precision, and have a stronger global searching ability than other algorithms.</p>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cpe.70136","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144773590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrastive Learning-Based Fine-Tuning Method for Cross-Modal Text-Image Retrieval","authors":"Wei Zhao, Xuan Ma, Weigang Wang","doi":"10.1002/cpe.70228","DOIUrl":"https://doi.org/10.1002/cpe.70228","url":null,"abstract":"<div>\u0000 \u0000 <p>With the rapid proliferation of social media and smart devices, multimodal data has grown explosively, making traditional unimodal retrieval methods insufficient for addressing cross-modal semantic correlation tasks. To tackle the challenges caused by text redundancy and image noise in real-world scenarios, this paper proposes a contrastive learning-based, two-stage progressive fine-tuning approach for building a high-precision text-image cross-modal retrieval system. We design an efficient data preprocessing pipeline: Text data undergoes tokenization, stop-word filtering, and TF-IDF-based keyword extraction, while image data is enhanced using Cutout-style random masking to improve robustness against occlusion and noise. The model employs a dual-tower architecture composed of a ResNet50 visual encoder and a RoBERTa-based text encoder, with joint embedding space optimized using InfoNCE loss. A Locked-image Tuning (LiT) strategy is introduced, where the visual encoder is initially frozen and then both encoders are fine-tuned jointly with mixed-precision training and gradient clipping to ensure convergence stability. To improve data loading efficiency, we utilize LMDB to store 50,000 image-text pairs, significantly reducing I/O overhead. Experiments on an industry-scale dataset demonstrate that the fine-tuned model achieves R@5 of 87.1% (text-to-image) and 87.4% (image-to-text), outperforming baselines by over 13% while reducing GPU memory usage by 18%. Our method achieves a balance between accuracy, efficiency, and scalability, making it suitable for applications such as social media content management and e-commerce cross-modal search.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144773591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Branch Cross-Scale Texture Feature Fusion for Low-Resolution Face Recognition","authors":"Jihua Ye, Wentao Geng, Tiantian Wang, Youcai Zou, Chao Wang, Zhan Xu, Aiwen Jiang","doi":"10.1002/cpe.70209","DOIUrl":"https://doi.org/10.1002/cpe.70209","url":null,"abstract":"<div>\u0000 \u0000 <p>Face images captured often suffer from low resolution and significant information loss. Traditional methods struggle to effectively extract local key features, leading to suboptimal recognition accuracy. To address these challenges, this paper introduces a novel approach based on dual-branch cross-scale texture feature fusion for low-resolution face recognition (DCSF-LR). The proposed method enhances the focus on facial details through local texture feature fusion and a dual-branch cross-scale attention module, enabling the extraction of richer facial features. Additionally, knowledge distillation is utilized to transfer knowledge from high-resolution face images to the low-resolution face recognition model. A newly designed loss function is introduced to facilitate effective knowledge transfer, better adapting the model to low-resolution face recognition tasks in uncontrolled environments. Moreover, a degradation module is developed to generate realistic low-resolution face images for training the student model, thereby improving its adaptability in real-world scenarios. Extensive experiments on the TinyFace and AgeDB-30 data sets demonstrate the effectiveness of the proposed method. It achieves 90.04% accuracy at <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>28</mn>\u0000 <mo>×</mo>\u0000 <mn>28</mn>\u0000 </mrow>\u0000 <annotation>$$ 28times 28 $$</annotation>\u0000 </semantics></math> resolution on AgeDB-30 and 57.73% (ACC@5) on TinyFace, surpassing existing methods in both accuracy and generalization.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144767893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Node-Level Data Access Time Using Cluster-Based Deep Reinforcement Learning Models","authors":"Peerzada Hamid Ahmad, Munishwar Rai","doi":"10.1002/cpe.70232","DOIUrl":"https://doi.org/10.1002/cpe.70232","url":null,"abstract":"<div>\u0000 \u0000 <p>Distributed systems must have effective node-level data access to function optimally. Current techniques often suffer from delays and inaccurate information retrieval, necessitating novel strategies to maximize access duration. Although grouping extends the life of Wireless Sensor Networks (WSN) and saves energy, energy communication has not been thoroughly investigated in existing WSNs. This study proposes a unique Cluster-based Deep Reinforcement Learning (CDRL) approach to enhance information access speed at the node level. By grouping nodes according to connection structure and information accessibility patterns, the proposed CDRL model makes information organization and retrieval more effective. In the CDRL approach, neighboring nodes within a group select a suitable Cluster Head (CH) by monitoring environmental factors such as power consumption and proximity to the Base Station (BS). Each neighboring node chooses the best group based on minimizing energy usage and maximizing network lifespan. The CDRL method computes node weights based on movement and available battery power, with the node having the highest weight becoming the principal CH. When the CH's battery power depletes beyond a certain point, secondary clustering heads are chosen. This method reduces clustered management overhead and uses battery energy in a distributed manner, extending network life. The CH with the greatest reward point is chosen for transmitting information. The results indicate that combining reinforcement learning with cluster-based tactics significantly improves decentralized networks' responsiveness and effectiveness in information handling. Energy savings of 7.41%, 2.79%, 3.27%, and 4.03% are attained for deployed nodes of 100, 200, 300, and 400, respectively. The study shows that the CDRL method significantly decreases information access periods and routes packets faster than other methods.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144773762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SmartSecure: An Integrated Semantic Vulnerability Mining Framework for Ethereum Smart Contract","authors":"Vikas Kumar Jain, Meenakshi Tripathi","doi":"10.1002/cpe.70214","DOIUrl":"https://doi.org/10.1002/cpe.70214","url":null,"abstract":"<div>\u0000 \u0000 <p>Smart Contracts ensure trust through blockchain technology, streamline processes, and have disruptive potential across various industries. However, the issue of smart contract security cannot be underestimated. The vulnerability of smart contracts to exploitation has led to substantial losses, prompting increased attention toward vulnerability mining. Existing efforts for analyzing contract security heavily depend on inflexible rules set by experts, making them non-adaptable or scalable. Although various machine-learning methods have emerged for vulnerability mining in smart contracts, a research gap remains in effectively integrating diverse features of complex smart contracts with deep neural networks for enhanced detection. This paper presents SmartSecure, a vulnerability mining framework incorporating high-level semantic features extracted from contract source code. It provides in-depth local insights into vulnerabilities through contract property graphs that integrate abstract syntax trees, control flow graphs, and data dependency graphs, encompassing all syntactic and semantic aspects of the contract function. To fortify these features, we integrate them with low-level features derived from opcode sequences, encompassing global aspects. These diverse features are seamlessly fused and processed through a novel neural network design, resulting in a robust and effective solution. We evaluate our framework over 25,129 real-world smart contracts. Extensive experiments demonstrate the superiority of our method over existing tools and neural network-based approaches. It achieves an exceptional performance level of up to 97.6%, marking a significant step forward in smart contract security.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144767525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Classification of Skin Diseases Using Microscopic Images: A Machine Learning Approach","authors":"Zehra Karapinar Senturk, Recep Guler, Yunus Ozcan, Mehmet Gamsizkan","doi":"10.1002/cpe.70220","DOIUrl":"https://doi.org/10.1002/cpe.70220","url":null,"abstract":"<p>This study presents a machine learning-based approach for the automated classification of skin diseases, specifically targeting morphea and lichen sclerosus, using microscopic images. The proposed method involves a systematic workflow, including image preprocessing techniques such as resizing, Reinhard normalization, Gaussian filtering, and CLAHE histogram equalization to enhance image quality. Feature extraction was performed using Gray-Level Co-occurrence Matrix (GLCM) and histogram-based statistical methods, capturing texture and intensity characteristics of skin tissues. Several classification models, including Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbors (K-NN), and Logistic Regression (LR), were evaluated using accuracy, precision, recall, and F1 score, with hyperparameter optimization via grid search. The experimental results revealed that the combined feature set (GLCM + Histogram) achieved the highest performance, with the RF and K-NN models yielding a 100% in all performance metrics, including accuracy, sensitivity, recall, and F1-score. The study introduces a novel approach by examining these two diseases simultaneously, offering a reliable tool to support dermatologists with accurate and quick diagnoses. Future work will focus on expanding the dataset, exploring advanced deep learning techniques, and integrating clinical metadata to enhance model generalizability.</p>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cpe.70220","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144767527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"C-CNPS: A Blockchain Sharding Network Model and Communication Strategy","authors":"Zhan Wen, Cheng Zhang, Wenzao Li, Yanhe Na, Dehao Ren","doi":"10.1002/cpe.70230","DOIUrl":"https://doi.org/10.1002/cpe.70230","url":null,"abstract":"<div>\u0000 \u0000 <p>Blockchain effectively addresses issues of trust and information asymmetry in distributed network architectures; however, its decentralization significantly constrains performance. Sharding is a viable solution to enhance blockchain scalability and improve network efficiency. In this context, this paper proposes a Sharding Algorithm Based on Coordinate Interval Distance(SACID), based on coordinate distance intervals, and develops a blockchain-sharded network communication model. This model partitions the blockchain network into multiple shards, enabling parallel information processing and enhancing scalability. Additionally, a Coordinate-based Network Path Selection Communication Strategy (CNPSC) is introduced to mitigate inter-shard communication overhead and consistency challenges, optimizing communication efficiency while maintaining security and decentralization. Furthermore, this paper incorporates the blockchain P2P core data transmission protocol Gossip, Dijkstra's algorithm, and the newly designed CNPSC communication strategy into the blockchain-shaded network communication model as data transmission protocols. Experimental analysis evaluates the robustness and communication efficiency of the proposed sharding model and CNPSC communication strategy. Results demonstrate that within the proposed blockchain-sharded network model, Gossip, Dijkstra, and CNPSC all exhibit strong robustness. Among them, the proposed CNPSC achieves the highest communication efficiency and delivers the best network performance.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144767322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MetaLP-DGI: Meta-Learning-Based Link Prediction With Centrality-Aware Deep Graph Infomax Embeddings","authors":"Fatima Ziya, Sanjay Kumar","doi":"10.1002/cpe.70211","DOIUrl":"https://doi.org/10.1002/cpe.70211","url":null,"abstract":"<div>\u0000 \u0000 <p>Link prediction is a fundamental task in social and complex network analysis, focused on forecasting the likelihood of unseen or future connections between nodes. Accurate link prediction can enhance understanding of network dynamics, reveal hidden structures, and improve recommendations in social and information networks. This paper proposes a novel Meta-Learning-Based Link Prediction model that utilizes a Centrality-Aware connectivity matrix and incorporates Deep Graph Infomax (DGI) embeddings with the CatBoost classifier. The connectivity matrix is constructed using node centrality measures like closeness centrality, degree centrality, and betweenness centrality by capturing the network's local and global structural properties. The DGI embedding algorithm efficiently learns the network's latent features, while the CatBoost classifier is employed to enhance prediction performance. To address the challenge of imbalanced datasets in social networks, we apply downsampling to create balanced training and testing datasets, ensuring robust model learning. Our framework demonstrates improved accuracy, scalability, and adaptability compared to traditional link prediction methods. Extensive experiments on real-world social network datasets show that the proposed model achieves superior performance in link prediction tasks, making it a promising approach for various network analysis applications.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144751393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}