Binary Oriented Feature Selection for Valid Product Derivation in Software Product Line
Muhammad Fezan Afzal, Imran Khan, Javed Rashid, Mubbashar Saddique, Heba G. Mohamed
Computers, Materials & Continua, 2023. DOI: 10.32604/cmc.2023.041627

Abstract: A Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage an SPL's common and variable features, their various relations, and the problem of cross-tree constraints (CTC). CTC problems arise between groups of common and variable features across the sub-trees of a feature model and are most diverse in Internet of Things (IoT) systems, where many different devices and protocols communicate. Managing the CTC problem to achieve valid product configurations in IoT-based SPLs is therefore complex, time-consuming, and hard. Previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool do not handle the CTC problem and consequently generate invalid products. This research proposes a novel approach, Binary Oriented Feature Selection Cross-Tree Constraints (BOFS-CTC), which finds all possible valid products by selecting features according to the cardinality constraints and cross-tree constraints of the SPL feature model. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. The BOFS-CTC algorithm was developed and applied to IoT-based feature models. The findings are that no relationship-constraint or CTC violations occur, and valid feature product configurations for application development are derived by removing the invalid ones. The accuracy of BOFS-CTC was measured by an integration sampling technique in which independently derived valid product configurations were compared with those produced by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort spent on invalid SPL products.
{"title":"An Effective Runge-Kutta Optimizer Based on Adaptive Population Size and Search Step Size","authors":"Ala Kana, Imtiaz Ahmad","doi":"10.32604/cmc.2023.040775","DOIUrl":"https://doi.org/10.32604/cmc.2023.040775","url":null,"abstract":"A newly proposed competent population-based optimization algorithm called RUN, which uses the principle of slope variations calculated by applying the Runge Kutta method as the key search mechanism, has gained wider interest in solving optimization problems. However, in high-dimensional problems, the search capabilities, convergence speed, and runtime of RUN deteriorate. This work aims at filling this gap by proposing an improved variant of the RUN algorithm called the Adaptive-RUN. Population size plays a vital role in both runtime efficiency and optimization effectiveness of metaheuristic algorithms. Unlike the original RUN where population size is fixed throughout the search process, Adaptive-RUN automatically adjusts population size according to two population size adaptation techniques, which are linear staircase reduction and iterative halving, during the search process to achieve a good balance between exploration and exploitation characteristics. In addition, the proposed methodology employs an adaptive search step size technique to determine a better solution in the early stages of evolution to improve the solution quality, fitness, and convergence speed of the original RUN. Adaptive-RUN performance is analyzed over 23 IEEE CEC-2017 benchmark functions for two cases, where the first one applies linear staircase reduction with adaptive search step size (LSRUN), and the second one applies iterative halving with adaptive search step size (HRUN), with the original RUN. To promote green computing, the carbon footprint metric is included in the performance evaluation in addition to runtime and fitness. Simulation results based on the Friedman and Wilcoxon tests revealed that Adaptive-RUN can produce high-quality solutions with lower runtime and carbon footprint values as compared to the original RUN and three recent metaheuristics. Therefore, with its higher computation efficiency, Adaptive-RUN is a much more favorable choice as compared to RUN in time stringent applications.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136053960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on the Role of Complex Networks in IoT and Brain Communication
Vijey Thayananthan, Aiiad Albeshri, Hassan A. Alamri, Muhammad Bilal Qureshi, Muhammad Shuaib Qureshi
Computers, Materials & Continua, 2023. DOI: 10.32604/cmc.2023.040184

Abstract: Complex networks in the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may apply to future research directions in 6G, photonic, IoT, brain, and other communication technologies. Heavy data traffic, huge capacity, and minimal dynamic latency are among the future requirements of 5G+ and 6G communication systems. In emerging communication technologies such as 5G+/6G-based photonic sensor communication, complex networks play an important role in meeting the future requirements of IoT and brain communication. In this paper, the state of a complex system modeled as a complex network (the connections between brain cells, neurons, etc.) must be measured to analyze the functions of neurons during brain communication; we measure this state through observability. Using 5G+/6G-based photonic sensor nodes, determining observability under the concept of contraction provides the stability of neurons. When IoT devices or other sensors fail to measure the state of connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes is lost during communication. Similarly, neurons, treated under the complex-network concept as neuron sensors in the brain, lose communication and connections. Affected sensor nodes in a contraction must therefore be compensated equivalently to maintain stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor in employing a complex network. To analyze observability recovery, a contraction detection algorithm with complex network properties can be used. Our survey shows that contraction size allows improving the performance of brain communication, the stability of neurons, and related properties through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
A New Partial Task Offloading Method in a Cooperation Mode under Multi-Constraints for Multi-UE
Shengyao Sun, Ying Du, Jiajun Chen, Xuan Zhang, Jiwei Zhang, Yiyi Xu
Computers, Materials & Continua, 2023. DOI: 10.32604/cmc.2023.037483

Abstract: In Multi-access Edge Computing (MEC), to deal with the task offloading problem of multiple user equipments (UEs) whose tasks have parallel relationships under multiple constraints, this paper proposes a cooperative partial task offloading method (named CPMM) that aims to reduce UE energy and computation consumption while meeting the task completion delay as far as possible. CPMM first studies the task offloading of a single UE and then builds multi-UE task offloading on that basis. CPMM uses the critical path algorithm to divide task modules into key and non-key modules. Subject to the constraints each UE faces when offloading tasks, it gives priority to offloading non-key modules and uses an evaluation decision method to select appropriate key modules for offloading. Fully accounting for the competition among multiple UEs for communication resources and MEC service resources, CPMM uses a weighted queuing method to ease the competition for communication resources and a branch decision algorithm by which the base station determines module offloading locations according to the MEC servers' resources. It achieves its goal by selecting reasonable modules to offload and using the cooperation of the UE, the MEC, and the Cloud Center to determine where modules execute. Extensive experiments demonstrate that CPMM reduces task computation consumption by around 6% on average, reduces task completion delay by around 5% on average, and achieves a better task execution success rate than similar methods.
{"title":"An Air Defense Weapon Target Assignment Method Based on Multi-Objective Artificial Bee Colony Algorithm","authors":"Huaixi Xing, Qinghua Xing","doi":"10.32604/cmc.2023.036223","DOIUrl":"https://doi.org/10.32604/cmc.2023.036223","url":null,"abstract":"With the advancement of combat equipment technology and combat concepts, new requirements have been put forward for air defense operations during a group target attack. To achieve high-efficiency and low-loss defensive operations, a reasonable air defense weapon assignment strategy is a key step. In this paper, a multi-objective and multi-constraints weapon target assignment (WTA) model is established that aims to minimize the defensive resource loss, minimize total weapon consumption, and minimize the target residual effectiveness. An optimization framework of air defense weapon mission scheduling based on the multi-objective artificial bee colony (MOABC) algorithm is proposed. The solution for point-to-point saturated attack targets at different operational scales is achieved by encoding the nectar with real numbers. Simulations are performed for an imagined air defense scenario, where air defense weapons are saturated. The non-dominated solution sets are obtained by the MOABC algorithm to meet the operational demand. In the case where there are more weapons than targets, more diverse assignment schemes can be selected. According to the inverse generation distance (IGD) index, the convergence and diversity for the solutions ofthe non-dominated sorting genetic algorithm III (NSGA-III) algorithm and the MOABC algorithm are compared and analyzed. The results prove that the MOABC algorithm has better convergence and the solutions are more evenly distributed among the solution space.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Spider Monkey Optimization Algorithm Combining Opposition-Based Learning and Orthogonal Experimental Design","authors":"Weizhi Liao, Xiaoyun Xia, Xiaojun Jia, Shigen Shen, Helin Zhuang, Xianchao Zhang","doi":"10.32604/cmc.2023.040967","DOIUrl":"https://doi.org/10.32604/cmc.2023.040967","url":null,"abstract":"As a new bionic algorithm, Spider Monkey Optimization (SMO) has been widely used in various complex optimization problems in recent years. However, the new space exploration power of SMO is limited and the diversity of the population in SMO is not abundant. Thus, this paper focuses on how to reconstruct SMO to improve its performance, and a novel spider monkey optimization algorithm with opposition-based learning and orthogonal experimental design (SMO<sup>3</sup>) is developed. A position updating method based on the historical optimal domain and particle swarm for Local Leader Phase (LLP) and Global Leader Phase (GLP) is presented to improve the diversity of the population of SMO. Moreover, an opposition-based learning strategy based on self-extremum is proposed to avoid suffering from premature convergence and getting stuck at locally optimal values. Also, a local worst individual elimination method based on orthogonal experimental design is used for helping the SMO algorithm eliminate the poor individuals in time. Furthermore, an extended SMO<sup>3</sup> named CSMO<sup>3</sup> is investigated to deal with constrained optimization problems. The proposed algorithm is applied to both unconstrained and constrained functions which include the CEC2006 benchmark set and three engineering problems. Experimental results show that the performance of the proposed algorithm is better than three well-known SMO algorithms and other evolutionary algorithms in unconstrained and constrained problems.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136052696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Extraction with Optimal Bi-LSTM","authors":"Bahera H. Nayef, Siti Norul Huda Sheikh Abdullah, Rossilawati Sulaiman, Ashwaq Mukred Saeed","doi":"10.32604/cmc.2023.039528","DOIUrl":"https://doi.org/10.32604/cmc.2023.039528","url":null,"abstract":"Text extraction from images using the traditional techniques of image collecting, and pattern recognition using machine learning consume time due to the amount of extracted features from the images. Deep Neural Networks introduce effective solutions to extract text features from images using a few techniques and the ability to train large datasets of images with significant results. This study proposes using Dual Maxpooling and concatenating convolution Neural Networks (CNN) layers with the activation functions Relu and the Optimized Leaky Relu (OLRelu). The proposed method works by dividing the word image into slices that contain characters. Then pass them to deep learning layers to extract feature maps and reform the predicted words. Bidirectional Short Memory (BiLSTM) layers extract more compelling features and link the time sequence from forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calcifies the training and validation loss rates. In addition to decoding the extracted feature to reform characters again and linking them according to their time sequence. The proposed model performance is evaluated using training and validation loss errors on the Mjsynth and Integrated Argument Mining Tasks (IAM) datasets. The result of IAM was 2.09% for the average loss errors with the proposed dual Maxpooling and OLRelu. In the Mjsynth dataset, the best validation loss rate shrunk to 2.2% by applying concatenating CNN layers, and Relu.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136053005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of Different Stages of Alzheimer's Disease Using CNN Classifier
S M Hasan Mahmud, Md Mamun Ali, Mohammad Fahim Shahriar, Fahad Ahmed Al-Zahrani, Kawsar Ahmed, Dip Nandi, Francis M. Bui
Computers, Materials & Continua, 2023. DOI: 10.32604/cmc.2023.039020

Abstract: Alzheimer's disease (AD) is a neurodegenerative impairment that causes loss of behavior, thinking, and memory. The most common symptoms of AD are memory loss and early aging, and the disease has several other serious impacts. Although AD cannot be cured permanently, its impact can be mitigated by early-stage detection, which is also the most challenging task in controlling the disease. This study proposes a predictive model based on machine learning and deep learning approaches to detect AD in its initial phase. To build the predictive model, open-source data were collected covering images of five stages of AD: Cognitive Normal (CN), Early Mild Cognitive Impairment (EMCI), Mild Cognitive Impairment (MCI), Late Mild Cognitive Impairment (LMCI), and AD. Each stage is treated as a class, and the dataset was divided into three settings: binary-class, three-class, and five-class. Different preprocessing steps with augmentation techniques were applied to identify AD efficiently, and a random oversampling technique was integrated to handle the imbalance among the target classes, mitigating model overfitting and bias. Three machine learning classifiers, random forest (RF), K-nearest neighbors (KNN), and support vector machine (SVM), and two deep learning methods, a convolutional neural network (CNN) and an artificial neural network (ANN), were then applied to these datasets. Analysis of the models and datasets shows that the CNN on the binary-class setting performed best, with 88.20% accuracy, indicating that the model has high potential for detecting AD in its initial phase.