{"title":"SAFE: EFFICIENT DDOS ATTACK DEFENSE WITH ELASTIC TRAFFIC FLOW INSPECTION IN SDN-BASED DATA CENTERS","authors":"Tri Gia Nguyen, Hai Hoang Nguyen, Trung V. Phan","doi":"10.15625/1813-9663/16629","DOIUrl":"https://doi.org/10.15625/1813-9663/16629","url":null,"abstract":"In this paper, we propose an efficient distributed denial-of-Service (DDoS) Attack deFEnse solution, namely SAFE, which utilizes an elastic traffic flow inspection mechanism for Software-Defined Networking (SDN) based data centers. In particular, we first examine a leaf-spine SDN-based data center network, which is highly vulnerable to volumetric DDoS attacks. Next, we develop a rank-based anomaly detection algorithm that recognizes anomalies in the volume of incoming traffic. Then, for traffic flow inspection, we introduce a component called DFI (Deep Flow Inspection) running Open vSwitch (OvS), which can be dynamically initiated on demand (as a virtual machine) to collect traffic flow statistics. By utilizing deep reinforcement learning-based traffic monitoring from our previous study, the DFIs are protected from the flow-table overflow problem while providing more detailed traffic flow information. Afterward, a machine learning-based attack detector analyzes the gathered flow rule statistics to identify the attack, and appropriate policies are enforced if an attack is recognized. The experimental results show that SAFE can effectively defend against volumetric DDoS attacks while assuring a reliable Quality-of-Service level for benign traffic flows in SDN-based data center networks. Specifically, for TCP SYN and UDP floods, SAFE improves attack detection performance by approximately 40% and 30%, respectively, compared to the existing SATA solution. Furthermore, in attack mitigation performance, measured as the ratio of dropped malicious packets, SAFE outperforms SATA by approximately 48% (for TCP SYN flood) and 52% (for UDP flood).","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88914757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
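The abstract above mentions a rank-based anomaly detection algorithm for incoming traffic volume but does not specify it. Purely as an illustrative sketch of the general rank-based idea (not the paper's algorithm), one nonparametric approach flags a sample whose rank within a sliding window of recent volumes is extreme; the `window` and `rank_threshold` parameters here are hypothetical:

```python
from collections import deque

def make_rank_detector(window=50, rank_threshold=0.95):
    """Toy rank-based anomaly detector: flags a traffic-volume sample whose
    rank within a sliding window of recent samples is extreme.
    Illustrative only; the paper's actual algorithm is not reproduced here."""
    history = deque(maxlen=window)

    def detect(volume):
        # Rank = fraction of recent samples the new value exceeds.
        rank = sum(v < volume for v in history) / max(len(history), 1)
        history.append(volume)
        # Only flag once the window holds enough context.
        return len(history) >= window and rank >= rank_threshold

    return detect
```

Because the test is rank-based, it needs no assumption about the traffic distribution, only enough history for an extreme rank to be meaningful.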
{"title":"PARALLEL FUZZY FREQUENT ITEMSET MINING USING CELLULAR AUTOMATA","authors":"T. T. Tran, T. T. Nguyen, Giang Nguyen, Chau N. Truong","doi":"10.15625/1813-9663/38/4/17462","DOIUrl":"https://doi.org/10.15625/1813-9663/38/4/17462","url":null,"abstract":"Finding frequent fuzzy itemsets in operational quantitative databases is a significant challenge for fuzzy association rule mining in data mining. Detecting frequent fuzzy itemsets makes business decision-making and strategy formulation more precise. However, these data models are characterized by very large numbers of transactions produced continuously and at high speed, which limits the ability to compute the support of itemsets containing fuzzy attributes. As a result, mining with parallel processing techniques has emerged as a potential solution to this performance issue. This study presents a reinforced technique for mining frequent fuzzy itemsets based on cellular learning automata (CLA). The results demonstrate that frequent itemset mining can be accomplished in less running time with the proposed method than with the iMFFP and NPSFF methods.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89291469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TWO-PHASE COMBINED MODEL TO IMPROVE THE ACCURACY OF INDOOR LOCATION FINGERPRINTING","authors":"Van-Hieu Vu, Binh Ngo-Van, Tung Hoang Do Thanh","doi":"10.15625/1813-9663/38/4/17592","DOIUrl":"https://doi.org/10.15625/1813-9663/38/4/17592","url":null,"abstract":"Wi-Fi fingerprinting based Indoor Positioning Systems (IPS) help locate and navigate users inside buildings, and have become a popular research topic in recent years. For the most part, authors use traditional machine learning algorithms to enhance positioning accuracy. Their methods apply a standalone algorithm, or a combination of algorithms in only one phase, producing results with acceptable accuracy. In this paper, we present a different approach: a machine learning model that combines many algorithms in two phases, together with a proposed feature reduction method. Specifically, our research focuses on combining different regression and classification algorithms, including K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Random Forest (RF), Extra Tree Regressor (extraTree), Light Gradient Boosting Machine (LGBM), Logistic Regression (LR), and Linear Regression (LiR), to create a new data set and models that can be used in the training phase. The proposed models are tested on the UJIIndoorLoc dataset. Our experimental results show a prediction accuracy of 98.73% by floor, and estimated accuracies of 99.62% and 99.52% by longitude and latitude, respectively. Compared with the results of models using independent algorithms, and with other studies that apply different models with the same algorithms on the same dataset, most of our results are better.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75431956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AN IN-DEPTH EVALUATION OF FREQUENCY-AWARE SCHEDULER FOR IMPROVING USER EXPERIENCE ON MOBILE DEVICES","authors":"Giang Son Tran, A. Carlier, D. Hagimont","doi":"10.15625/1813-9663/38/4/16873","DOIUrl":"https://doi.org/10.15625/1813-9663/38/4/16873","url":null,"abstract":"Mobile devices increasingly pervade our daily lives. Users of such devices expect a good experience, which is mainly linked with performance. However, higher performance also means shorter battery life, which detracts from the overall user experience. A common way to balance this performance-battery trade-off is to reduce the CPU frequency under low load using Dynamic Voltage and Frequency Scaling. In our previous work, we introduced a Frequency-Aware Completely Fair Scheduler (FA-CFS), which reduces battery consumption and improves the smoothness of mobile interface browsing. However, FA-CFS has so far been evaluated only through quantitative results on power consumption, not through the user experience of actually using a mobile device. In this paper, we perform an in-depth evaluation of the FA-CFS model, combining quantitative results for system performance with qualitative results for user experience. The experiments show that FA-CFS can reduce the rate of interface frame-time peaks by up to 40% and has a measurable positive impact on the quality of user experience, with a quicker, more responsive interface.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"69 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90174323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FAST COMPUTATION OF DIRECT EXPONENTIATION TO SPEED UP IMPLEMENTATION OF DYNAMIC BLOCK CIPHERS","authors":"Luong Tran Thi","doi":"10.15625/1813-9663/38/4/17226","DOIUrl":"https://doi.org/10.15625/1813-9663/38/4/17226","url":null,"abstract":"MDS (maximum distance separable) matrices are derived from MDS codes, which have long been studied in error-correcting code theory and have many applications in block ciphers. To improve the security of block ciphers, dynamic block ciphers can be created, and applying MDS matrix transformations is one method of making block ciphers dynamic. Direct exponentiation is a transformation that can be used to generate dynamic MDS matrices for a dynamic diffusion layer of a block cipher. However, for cryptographic algorithms that use an MDS matrix as a component, matrix multiplication is quite expensive to implement, especially when the matrix is large. This paper presents the mathematical basis for fast computation of the direct exponentiation of an MDS matrix, and on that basis suggests how to apply this fast computation in dynamic algorithms that use direct exponentiation. This result is particularly valuable for software implementations of MDS matrices, especially in dynamic block ciphers, where it increases execution speed.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90182440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
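The paper's direct-exponentiation transformation over finite fields is not reproduced in the abstract. As a generic illustration of why fast exponentiation matters, the standard square-and-multiply scheme computes a matrix power in O(log e) matrix multiplications instead of e-1; this sketch uses integer matrices modulo a prime rather than the GF(2^m) arithmetic a real MDS diffusion layer would use:

```python
def mat_mul(a, b, p):
    """Multiply two square matrices with entries reduced modulo p."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mat_pow(m, e, p):
    """Compute m**e mod p by square-and-multiply: O(log e) matrix
    multiplications instead of e - 1 for naive repeated multiplication."""
    n = len(m)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:               # current binary digit of e is 1
            result = mat_mul(result, m, p)
        m = mat_mul(m, m, p)    # square for the next binary digit
        e >>= 1
    return result
```

For example, raising the 2x2 Fibonacci matrix [[1,1],[1,0]] to the 10th power takes only four squarings and two multiplications.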
{"title":"EVOLUTIONARY ALGORITHM FOR TASK OFFLOADING IN VEHICULAR FOG COMPUTING","authors":"D. Son, Vu Tri An, H. Vo, Pham Vu Minh, Nguyễn Quang Phúc, Nguyen Phi Le, B. Nguyen, Huynh Thi Thanh Binh","doi":"10.15625/1813-9663/38/3/17012","DOIUrl":"https://doi.org/10.15625/1813-9663/38/3/17012","url":null,"abstract":"Internet of Things technology allows many physical devices to connect over the Internet. The data and tasks generated by these devices put pressure on the traditional cloud due to high resource and latency demands. Vehicular Fog Computing (VFC) utilizes the computational resources integrated into vehicles to support the processing of end-user-generated tasks. This research first proposes a bag-of-tasks offloading framework that allows vehicles to handle multiple tasks at any given time step. We then implement an evolution-based algorithm called Time-Cost-aware Task-Node Mapping (TCaTNM) to optimize completion time and operating costs simultaneously. The proposed algorithm is evaluated on datasets with varying numbers of tasks and computing nodes. The results show that our scheduling algorithm saves more than 60% in monetary cost compared to the Particle Swarm Optimization (PSO) algorithm, with competitive computation time. Further evaluations also show that our algorithm learns much faster and scales its performance as the number of tasks and computing nodes increases.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81314401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EMPIRICAL STUDY OF FEATURE EXTRACTION APPROACHES FOR IMAGE CAPTIONING IN VIETNAMESE","authors":"Khang Nguyen","doi":"10.15625/1813-9663/38/4/17548","DOIUrl":"https://doi.org/10.15625/1813-9663/38/4/17548","url":null,"abstract":"Image captioning is a challenging task that is still being addressed in the 2020s. The input is an image, and the output is a generated caption that describes the content of the image. In this study, I focus on image captioning in Vietnamese. In detail, I present an empirical study of feature extraction approaches that use current state-of-the-art object detection methods to represent images in the model space. Each type of feature is used to train a Transformer-based captioning model. I investigate the effectiveness of the different feature types on two standard Vietnamese benchmark datasets, UIT-ViIC and VieCap4H. The experimental results provide crucial insights into the feature extraction task for image captioning in Vietnamese.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"50 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73844133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A HYBRID PSO-SA SCHEME FOR IMPROVING ACCURACY OF FUZZY TIME SERIES FORECASTING MODELS","authors":"Phạm Đình Phong, Nguyen Duc Du, Phạm Hoàng Hiệp, Trần Xuân Thành","doi":"10.15625/1813-9663/38/3/17424","DOIUrl":"https://doi.org/10.15625/1813-9663/38/3/17424","url":null,"abstract":"Forecasting methods based on fuzzy time series have been examined intensively in recent years. Three main factors affect the accuracy of these methods: the length of intervals, the way fuzzy logical relationship groups are established, and the defuzzification technique. Many studies utilize various optimization techniques to tune interval lengths and thereby improve forecasting accuracy. Following that research trend, this paper proposes a hybrid particle swarm optimization and simulated annealing (PSO-SA) algorithm to optimize interval lengths. The experimental results, in comparison with existing forecasting models, show that the proposed model is effective.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80496781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
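The abstract does not detail how PSO and SA are combined. One common hybridization, shown here purely as an assumed sketch (minimizing a generic continuous objective rather than the paper's interval-length encoding; all parameter values are hypothetical), runs standard PSO updates and adds a simulated-annealing walk around the global best that can accept worse candidates while the temperature is high:

```python
import math
import random

def pso_sa(f, dim, lo, hi, n_particles=20, iters=150, seed=1):
    """Hybrid PSO-SA sketch (illustrative; not the paper's exact scheme):
    standard PSO velocity/position updates, plus an SA walk around the
    global best that may accept worse candidates early to escape local
    optima. Returns the best solution seen and its objective value."""
    rng = random.Random(seed)
    w, c1, c2, temp, cooling = 0.7, 1.5, 1.5, 1.0, 0.95
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # SA "current" state
    best, best_val = gbest[:], gbest_val           # best ever seen

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
            if val < gbest_val:
                gbest, gbest_val = pos[i][:], val
        # SA step on the global best: accept a worse neighbor with
        # probability exp(-delta/temp), then cool the temperature.
        cand = [min(hi, max(lo, x + rng.gauss(0.0, 0.1))) for x in gbest]
        cand_val = f(cand)
        delta = cand_val - gbest_val
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            gbest, gbest_val = cand, cand_val
        if gbest_val < best_val:
            best, best_val = gbest[:], gbest_val
        temp *= cooling
    return best, best_val
```

In the paper's setting, `f` would score a candidate vector of interval lengths by the resulting forecasting error; here it is left as an arbitrary objective.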
{"title":"AN EFFECTIVE ALGORITHM FOR COMPUTING REDUCTS IN DECISION TABLES","authors":"Do Si Truong, Lam Thanh Hien, N. Thanh Tung","doi":"10.15625/1813-9663/38/3/17450","DOIUrl":"https://doi.org/10.15625/1813-9663/38/3/17450","url":null,"abstract":"Attribute reduction is an important research topic in rough set theory. A reduct of a decision table is a minimal subset of the conditional attributes that provides the same information for classification purposes as the entire set of available attributes. The classification task for a high-dimensional decision table can be solved faster if a reduct, instead of the original whole set of attributes, is used. In this paper, we propose a reduct computing algorithm that uses attribute clustering and works in three main stages. In the first stage, irrelevant attributes are eliminated. In the second stage, the relevant attributes are divided into an appropriately chosen number of clusters by the Partitioning Around Medoids (PAM) clustering method, integrated with a special metric on the attribute space, the normalized variation of information. In the third stage, the most class-related attribute in each cluster is selected as its representative; the selected attributes form the approximate reduct. The proposed algorithm is implemented and evaluated experimentally. The experimental results show that it computes approximate reducts of small size and high classification accuracy when the number of clusters used to group the attributes is appropriately selected.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88396764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
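The normalized variation of information used as the clustering metric in the second stage has a standard form. A minimal sketch for two discrete attributes, assuming the common definition d(X, Y) = 1 - I(X; Y) / H(X, Y) (the paper may instantiate the normalization differently):

```python
import math
from collections import Counter

def normalized_vi(x, y):
    """Normalized variation of information between two discrete attributes,
    d(X, Y) = 1 - I(X; Y) / H(X, Y). It is 0 for identical partitions and
    1 for independent ones, and satisfies the metric axioms, which makes it
    usable as a distance for attribute clustering (e.g., with PAM)."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    hx = -sum(c / n * math.log(c / n) for c in px.values())   # H(X)
    hy = -sum(c / n * math.log(c / n) for c in py.values())   # H(Y)
    hxy = -sum(c / n * math.log(c / n) for c in pxy.values()) # H(X, Y)
    mi = hx + hy - hxy                                        # I(X; Y)
    return 0.0 if hxy == 0 else 1.0 - mi / hxy
```

A PAM implementation would then use `normalized_vi` over attribute columns as its pairwise dissimilarity.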
{"title":"REVISITING SOME FUZZY ALGEBRAIC STRUCTURES","authors":"R. Kellil","doi":"10.15625/1813-9663/38/3/17039","DOIUrl":"https://doi.org/10.15625/1813-9663/38/3/17039","url":null,"abstract":"Following our investigations of fuzzy algebraic structures begun in [6--8] and [9], in the present work we revisit fuzzy groups and fuzzy ideals, introduce some new examples, and then define the notion of a fuzzy relation modulo a fuzzy subgroup and modulo a fuzzy ideal. As a consequence, we introduce right and left cosets modulo a fuzzy relation. This work and the previously cited ones can be considered a continuation of the investigations initiated in [1--5]. In introducing these new definitions, our aim is that the resulting theory generalizes classical and commonly known concepts of algebra.","PeriodicalId":15444,"journal":{"name":"Journal of Computer Science and Cybernetics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85305088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}