{"title":"A Wrapping Encryption Based on Double Randomness Mechanism","authors":"Yi-Li Huang, Fang-Yie Leu, Ruey-Kai Sheu, Jung-Chun Liu, Chi-Jan Huang","doi":"10.32604/cmc.2023.037161","DOIUrl":"https://doi.org/10.32604/cmc.2023.037161","url":null,"abstract":"Currently, data security mainly relies on password (<i>PW</i>) or system channel key (<i>SK</i><sub><i>CH</i></sub>) to encrypt data before they are sent, no matter whether in broadband networks, the 5th generation (5G) mobile communications, satellite communications, and so on. In these environments, a fixed password or channel key (e.g., <i>PW</i>/<i>SK</i><sub><i>CH</i></sub>) is often adopted to encrypt different data, resulting in security risks since this <i>PW</i>/<i>SK</i><sub><i>CH</i></sub> may be solved after hackers collect a huge amount of encrypted data. Actually, the most popularly used security mechanism Advanced Encryption Standard (AES) has its own problems, e.g., several rounds have been solved. On the other hand, if data protected by the same <i>PW</i>/<i>SK</i><sub><i>CH</i></sub> at different time points can derive different data encryption parameters, the system’s security level will be then greatly enhanced. Therefore, in this study, a security scheme, named Wrapping Encryption Based on Double Randomness Mechanism (WEBDR), is proposed by integrating a password key (or a system channel key) and an Initialization Vector (<i>IV</i>) to generate an Initial Encryption Key (<i>IEK</i>). Also, an Accumulated Shifting Substitution (<i>ASS</i>) function and a three-dimensional encryption method are adopted to produce a set of keys. Two randomness encryption mechanisms are developed. The first generates system sub-keys and calculates the length of the first pseudo-random numbers by employing <i>IEK</i> for providing subsequent encryption/decryption. The second produces a random encryption key and a sequence of internal feedback codes and computes the length of the second pseudo-random numbers for encrypting delivered messages. A wrapped mechanism is further utilized to pack a ciphertext file so that a wrapped ciphertext file, rather than the ciphertext, will be produced and then transmitted to its destination. The findings are as follows. Our theoretic analyses and simulations demonstrate that the security of the WEBDR in cloud communication has achieved its practical security. Also, AES requires 176 times exclusive OR (XOR) operations for both encryption and decryption, while the WEBDR consumes only 3 operations. That is why the WEBDR is 6.7~7.09 times faster than the AES, thus more suitable for replacing the AES to protect data transmitted between a cloud system and its users.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135317692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Transmission and Transformation Fault Detection Algorithm Based on Improved YOLOv5","authors":"Xinliang Tang, Xiaotong Ru, Jingfang Su, Gabriel Adonis","doi":"10.32604/cmc.2023.038923","DOIUrl":"https://doi.org/10.32604/cmc.2023.038923","url":null,"abstract":"On the transmission line, the invasion of foreign objects such as kites, plastic bags, and balloons and the damage to electronic components are common transmission line faults. Detecting these faults is of great significance for the safe operation of power systems. Therefore, a YOLOv5 target detection method based on a deep convolution neural network is proposed. In this paper, Mobilenetv2 is used to replace Cross Stage Partial (CSP)-Darknet53 as the backbone. The structure uses depth-wise separable convolution toreduce the amount of calculation and parameters; improve the detection rate. At the same time, to compensate for the detection accuracy, the Squeeze-and-Excitation Networks (SENet) attention model is fused into the algorithm framework and a new detection scale suitable for small targets is added to improve the significance of the fault target area in the image. Collect pictures of foreign matters such as kites, plastic bags, balloons, and insulator defects of transmission lines, and sort them into a data set. The experimental results on datasets show that the mean Accuracy Precision (mAP) and recall rate of the algorithm can reach 92.1% and 92.4%, respectively. At the same time, by comparison, the detection accuracy of the proposed algorithm is higher than that of other methods.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Traffic Scene Captioning with Multi-Stage Feature Enhancement","authors":"Dehai Zhang, Yu Ma, Qing Liu, Haoxing Wang, Anquan Ren, Jiashu Liang","doi":"10.32604/cmc.2023.038264","DOIUrl":"https://doi.org/10.32604/cmc.2023.038264","url":null,"abstract":"Traffic scene captioning technology automatically generates one or more sentences to describe the content of traffic scenes by analyzing the content of the input traffic scene images, ensuring road safety while providing an important decision-making function for sustainable transportation. In order to provide a comprehensive and reasonable description of complex traffic scenes, a traffic scene semantic captioning model with multi-stage feature enhancement is proposed in this paper. In general, the model follows an encoder-decoder structure. First, multi-level granularity visual features are used for feature enhancement during the encoding process, which enables the model to learn more detailed content in the traffic scene image. Second, the scene knowledge graph is applied to the decoding process, and the semantic features provided by the scene knowledge graph are used to enhance the features learned by the decoder again, so that the model can learn the attributes of objects in the traffic scene and the relationships between objects to generate more reasonable captions. This paper reports extensive experiments on the challenging MS-COCO dataset, evaluated by five standard automatic evaluation metrics, and the results show that the proposed model has improved significantly in all metrics compared with the state-of-the-art methods, especially achieving a score of 129.0 on the CIDEr-D evaluation metric, which also indicates that the proposed model can effectively provide a more reasonable and comprehensive description of the traffic scene.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binary Oriented Feature Selection for Valid Product Derivation in Software Product Line","authors":"Muhammad Fezan Afzal, Imran Khan, Javed Rashid, Mubbashar Saddique, Heba G. Mohamed","doi":"10.32604/cmc.2023.041627","DOIUrl":"https://doi.org/10.32604/cmc.2023.041627","url":null,"abstract":"Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage SPL’s common and variable features with their different relations and problem of Crosstree Constraints (CTC). CTC problems exist in groups of common and variable features among the sub-tree of feature models more diverse in Internet of Things (IoT) devices because different Internet devices and protocols are communicated. Therefore, managing the CTC problem to achieve valid product configuration in IoT-based SPL is more complex, time-consuming, and hard. However, the CTC problem needs to be considered in previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and Genarch + tool; therefore, invalid products are generated. This research has proposed a novel approach Binary Oriented Feature Selection Crosstree Constraints (BOFS-CTC), to find all possible valid products by selecting the features according to cardinality constraints and cross-tree constraint problems in the feature model of SPL. BOFS-CTC removes the invalid products at the early stage of feature selection for the product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to, IoT-based feature models. The findings of this research are that no relationship constraints and CTC violations occur and drive the valid feature product configurations for the application development by removing the invalid product configurations. The accuracy of BOFS-CTC is measured by the integration sampling technique, where different valid product configurations are compared with the product configurations derived by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136053670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on the Role of Complex Networks in IoT and Brain Communication","authors":"Vijey Thayananthan, Aiiad Albeshri, Hassan A. Alamri, Muhammad Bilal Qureshi, Muhammad Shuaib Qureshi","doi":"10.32604/cmc.2023.040184","DOIUrl":"https://doi.org/10.32604/cmc.2023.040184","url":null,"abstract":"Complex networks on the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable in the future research directions of 6G, photonic, IoT, brain, etc., communication technologies. Heavy data traffic, huge capacity, minimal level of dynamic latency, etc. are some of the future requirements in 5G+ and 6G communication systems. In emerging communication, technologies such as 5G+/6G-based photonic sensor communication and complex networks play an important role in improving future requirements of IoT and brain communication. In this paper, the state of the complex system considered as a complex network (the connection between the brain cells, neurons, etc.) needs measurement for analyzing the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding observability influenced by the concept of contraction provides the stability of neurons. When IoT or any sensors fail to measure the state of the connectivity in the 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication will be lost. Similarly, neurons considered sing the complex networks concept neuron sensors in the brain lose communication and connections. Therefore, affected sensor nodes in a contraction are equivalent to compensate for maintaining stability conditions. In this compensation, loss of observability depends on the contraction size which is a key factor for employing a complex network. To analyze the observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey paper shows that contraction size will allow us to improve the performance of brain communication, stability of neurons, etc., through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136054169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Partial Task Offloading Method in a Cooperation Mode under Multi-Constraints for Multi-UE","authors":"Shengyao Sun, Ying Du, Jiajun Chen, Xuan Zhang, Jiwei Zhang, Yiyi Xu","doi":"10.32604/cmc.2023.037483","DOIUrl":"https://doi.org/10.32604/cmc.2023.037483","url":null,"abstract":"In Multi-access Edge Computing (MEC), to deal with multiple user equipment (UE)’s task offloading problem of parallel relationships under the multi-constraints, this paper proposes a cooperation partial task offloading method (named CPMM), aiming to reduce UE's energy and computation consumption, while meeting the task completion delay as much as possible. CPMM first studies the task offloading of single-UE and then considers the task offloading of multi-UE based on single-UE task offloading. CPMM uses the critical path algorithm to divide the modules into key and non-key modules. According to some constraints of UE-self when offloading tasks, it gives priority to non-key modules for offloading and uses the evaluation decision method to select some appropriate key modules for offloading. Based on fully considering the competition between multiple UEs for communication resources and MEC service resources, CPMM uses the weighted queuing method to alleviate the competition for communication resources and uses the branch decision algorithm to determine the location of module offloading by BS according to the MEC servers’ resources. It achieves its goal by selecting reasonable modules to offload and using the cooperation of UE, MEC, and Cloud Center to determine the execution location of the modules. Extensive experiments demonstrate that CPMM obtains superior performances in task computation consumption reducing around 6% on average, task completion delay reducing around 5% on average, and better task execution success rate than other similar methods.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Air Defense Weapon Target Assignment Method Based on Multi-Objective Artificial Bee Colony Algorithm","authors":"Huaixi Xing, Qinghua Xing","doi":"10.32604/cmc.2023.036223","DOIUrl":"https://doi.org/10.32604/cmc.2023.036223","url":null,"abstract":"With the advancement of combat equipment technology and combat concepts, new requirements have been put forward for air defense operations during a group target attack. To achieve high-efficiency and low-loss defensive operations, a reasonable air defense weapon assignment strategy is a key step. In this paper, a multi-objective and multi-constraints weapon target assignment (WTA) model is established that aims to minimize the defensive resource loss, minimize total weapon consumption, and minimize the target residual effectiveness. An optimization framework of air defense weapon mission scheduling based on the multi-objective artificial bee colony (MOABC) algorithm is proposed. The solution for point-to-point saturated attack targets at different operational scales is achieved by encoding the nectar with real numbers. Simulations are performed for an imagined air defense scenario, where air defense weapons are saturated. The non-dominated solution sets are obtained by the MOABC algorithm to meet the operational demand. In the case where there are more weapons than targets, more diverse assignment schemes can be selected. According to the inverse generation distance (IGD) index, the convergence and diversity for the solutions ofthe non-dominated sorting genetic algorithm III (NSGA-III) algorithm and the MOABC algorithm are compared and analyzed. The results prove that the MOABC algorithm has better convergence and the solutions are more evenly distributed among the solution space.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Spider Monkey Optimization Algorithm Combining Opposition-Based Learning and Orthogonal Experimental Design","authors":"Weizhi Liao, Xiaoyun Xia, Xiaojun Jia, Shigen Shen, Helin Zhuang, Xianchao Zhang","doi":"10.32604/cmc.2023.040967","DOIUrl":"https://doi.org/10.32604/cmc.2023.040967","url":null,"abstract":"As a new bionic algorithm, Spider Monkey Optimization (SMO) has been widely used in various complex optimization problems in recent years. However, the new space exploration power of SMO is limited and the diversity of the population in SMO is not abundant. Thus, this paper focuses on how to reconstruct SMO to improve its performance, and a novel spider monkey optimization algorithm with opposition-based learning and orthogonal experimental design (SMO<sup>3</sup>) is developed. A position updating method based on the historical optimal domain and particle swarm for Local Leader Phase (LLP) and Global Leader Phase (GLP) is presented to improve the diversity of the population of SMO. Moreover, an opposition-based learning strategy based on self-extremum is proposed to avoid suffering from premature convergence and getting stuck at locally optimal values. Also, a local worst individual elimination method based on orthogonal experimental design is used for helping the SMO algorithm eliminate the poor individuals in time. Furthermore, an extended SMO<sup>3</sup> named CSMO<sup>3</sup> is investigated to deal with constrained optimization problems. The proposed algorithm is applied to both unconstrained and constrained functions which include the CEC2006 benchmark set and three engineering problems. Experimental results show that the performance of the proposed algorithm is better than three well-known SMO algorithms and other evolutionary algorithms in unconstrained and constrained problems.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Extraction with Optimal Bi-LSTM","authors":"Bahera H. Nayef, Siti Norul Huda Sheikh Abdullah, Rossilawati Sulaiman, Ashwaq Mukred Saeed","doi":"10.32604/cmc.2023.039528","DOIUrl":"https://doi.org/10.32604/cmc.2023.039528","url":null,"abstract":"Text extraction from images using the traditional techniques of image collecting, and pattern recognition using machine learning consume time due to the amount of extracted features from the images. Deep Neural Networks introduce effective solutions to extract text features from images using a few techniques and the ability to train large datasets of images with significant results. This study proposes using Dual Maxpooling and concatenating convolution Neural Networks (CNN) layers with the activation functions Relu and the Optimized Leaky Relu (OLRelu). The proposed method works by dividing the word image into slices that contain characters. Then pass them to deep learning layers to extract feature maps and reform the predicted words. Bidirectional Short Memory (BiLSTM) layers extract more compelling features and link the time sequence from forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calcifies the training and validation loss rates. In addition to decoding the extracted feature to reform characters again and linking them according to their time sequence. The proposed model performance is evaluated using training and validation loss errors on the Mjsynth and Integrated Argument Mining Tasks (IAM) datasets. The result of IAM was 2.09% for the average loss errors with the proposed dual Maxpooling and OLRelu. In the Mjsynth dataset, the best validation loss rate shrunk to 2.2% by applying concatenating CNN layers, and Relu.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136053005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Different Stages of Alzheimer’s Disease Using CNN Classifier","authors":"S M Hasan Mahmud, Md Mamun Ali, Mohammad Fahim Shahriar, Fahad Ahmed Al-Zahrani, Kawsar Ahmed, Dip Nandi, Francis M. Bui","doi":"10.32604/cmc.2023.039020","DOIUrl":"https://doi.org/10.32604/cmc.2023.039020","url":null,"abstract":"Alzheimer’s disease (AD) is a neurodevelopmental impairment that results in a person’s behavior, thinking, and memory loss. The most common symptoms of AD are losing memory and early aging. In addition to these, there are several serious impacts of AD. However, the impact of AD can be mitigated by early-stage detection though it cannot be cured permanently. Early-stage detection is the most challenging task for controlling and mitigating the impact of AD. The study proposes a predictive model to detect AD in the initial phase based on machine learning and a deep learning approach to address the issue. To build a predictive model, open-source data was collected where five stages of images of AD were available as Cognitive Normal (CN), Early Mild Cognitive Impairment (EMCI), Mild Cognitive Impairment (MCI), Late Mild Cognitive Impairment (LMCI), and AD. Every stage of AD is considered as a class, and then the dataset was divided into three parts binary class, three class, and five class. In this research, we applied different preprocessing steps with augmentation techniques to efficiently identify AD. It integrates a random oversampling technique to handle the imbalance problem from target classes, mitigating the model overfitting and biases. Then three machine learning classifiers, such as random forest (RF), K-Nearest neighbor (KNN), and support vector machine (SVM), and two deep learning methods, such as convolutional neuronal network (CNN) and artificial neural network (ANN) were applied on these datasets. After analyzing the performance of the used models and the datasets, it is found that CNN with binary class outperformed 88.20% accuracy. The result of the study indicates that the model is highly potential to detect AD in the initial phase.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136053007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}