{"title":"Android malware defense through a hybrid multi-modal approach","authors":"Asmitha K.A. , Vinod P. , Rafidha Rehiman K.A. , Neeraj Raveendran , Mauro Conti","doi":"10.1016/j.jnca.2024.104035","DOIUrl":"10.1016/j.jnca.2024.104035","url":null,"abstract":"<div><div>The rapid proliferation of Android apps has given rise to a dark side, where increasingly sophisticated malware poses a formidable challenge for detection. To combat this evolving threat, we present an explainable hybrid multi-modal framework. This framework leverages the power of deep learning, with a novel model fusion technique, to illuminate the hidden characteristics of malicious apps. Our approach combines models (leveraging late fusion approach) trained on attributes derived from static and dynamic analysis, hence utilizing the unique strengths of each model. We thoroughly analyze individual feature categories, feature ensembles, and model fusion using traditional machine learning classifiers and deep neural networks across diverse datasets. Our hybrid fused model outperforms others, achieving an F1-score of 99.97% on CICMaldroid2020. We use SHAP (SHapley Additive exPlanations) and t-SNE (t-distributed Stochastic Neighbor Embedding) to further analyze and interpret the best-performing model. We highlight the efficacy of our architectural design through an ablation study, revealing that our approach consistently achieves over 99% detection accuracy across multiple deep learning models. This paves the way groundwork for substantial advancements in security and risk mitigation within interconnected Android OS environments.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104035"},"PeriodicalIF":7.7,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142433589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance enhancement of artificial intelligence: A survey","authors":"Moez Krichen , Mohamed S. Abdalzaher","doi":"10.1016/j.jnca.2024.104034","DOIUrl":"10.1016/j.jnca.2024.104034","url":null,"abstract":"<div><div>The advent of machine learning (ML) and Artificial intelligence (AI) has brought about a significant transformation across multiple industries, as it has facilitated the automation of jobs, extraction of valuable insights from extensive datasets, and facilitation of sophisticated decision-making processes. Nevertheless, optimizing efficiency has become a critical research field due to AI systems’ increasing complexity and resource requirements. This paper provides an extensive examination of several techniques and methodologies aimed at improving the efficiency of ML and artificial intelligence. In this study, we investigate many areas of research about AI. These areas include algorithmic improvements, hardware acceleration techniques, data pretreatment methods, model compression approaches, distributed computing frameworks, energy-efficient strategies, fundamental concepts related to AI, AI efficiency evaluation, and formal methodologies. Furthermore, we engage in an examination of the obstacles and prospective avenues in this particular domain. This paper offers a deep analysis of many subjects to equip researchers and practitioners with sufficient strategies to enhance efficiency within ML and AI systems. More particularly, the paper provides an extensive analysis of efficiency-enhancing techniques across multiple dimensions: algorithmic advancements, hardware acceleration, data processing, model compression, distributed computing, and energy consumption.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104034"},"PeriodicalIF":7.7,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142326905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing cold start delay in serverless computing using lightweight virtual machines","authors":"Amirmohammad Karamzadeh, Alireza Shameli-Sendi","doi":"10.1016/j.jnca.2024.104030","DOIUrl":"10.1016/j.jnca.2024.104030","url":null,"abstract":"<div><div>In recent years, serverless computing has gained considerable attention in academic, professional, and business circles. Unique features such as code development flexibility and the cost-efficient pay-as-you-go pricing model have led to predictions of widespread adoption of serverless services. Major players in the cloud computing sector, including industry giants like Amazon, Google, and Microsoft, have made significant advancements in the field of serverless services. However, cloud computing faces complex challenges, with two prominent ones being the latency caused by cold start instances and security vulnerabilities associated with container escapes. These challenges undermine the smooth execution of isolated functions, a concern amplified by technologies like Google gVisor and Kata Containers. While the integration of tools like lightweight virtual machines has alleviated concerns about container escape vulnerabilities, the primary issue remains the increased delay experienced during cold start instances in the execution of serverless functions. The purpose of this research is to propose an architecture that reduces cold start delay overhead by utilizing lightweight virtual machines within a commercial architecture, thereby achieving a setup that closely resembles real-world scenarios. This research employs supervised learning methodologies to predict function invocations by leveraging the execution patterns of other program functions. The goal is to proactively mitigate cold start scenarios by invoking the target function before actual user initiation, effectively transitioning from cold starts to warm starts. In this study, we compared our approach with two fixed and variable window strategies. Commercial platforms like Knative, OpenFaaS, and OpenWhisk typically employ a fixed 15-minute window during cold starts. In contrast to these platforms, our approach demonstrated a significant reduction in cold start incidents. Specifically, when calling a function 200 times with 5, 10, and 20 invocations within one hour, our approach achieved reductions in cold starts by 83.33%, 92.13%, and 90.90%, respectively. Compared to the variable window approach, which adjusts the window based on cold start values, our proposed approach was able to prevent 82.92%, 91.66%, and 90.56% of cold starts for the same scenario. These results highlight the effectiveness of our approach in significantly reducing cold starts, thereby enhancing the performance and responsiveness of serverless functions. Our method outperformed both fixed and variable window strategies, making it a valuable contribution to the field of serverless computing. 
Additionally, the implementation of pre-invocation strategies to convert cold starts into warm starts results in a substantial reduction in the execution time of functions within lightweight virtual machines.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104030"},"PeriodicalIF":7.7,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142419861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
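The core idea above is to learn from the invocation patterns of other functions and pre-invoke the target function so it is already warm when the user calls it. The sketch below is a minimal interpretation of that pre-warming loop under stated assumptions: a simple logistic-regression predictor, a synthetic per-slot invocation history, and a hypothetical `maybe_prewarm` helper standing in for a real platform invocation; it is not the paper's exact pipeline.

```python
# Minimal pre-warming sketch: predict from recent invocations of other functions
# whether the target function will be called in the next time slot; if so, invoke
# it early so the cold start becomes a warm start.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_slots, n_other_funcs = 500, 5

# Synthetic history: 1 if a function was invoked in a slot, else 0.
other = rng.integers(0, 2, size=(n_slots, n_other_funcs))
# The target tends to be called one slot after function 0 (synthetic correlation).
target = np.roll(other[:, 0], 1)

model = LogisticRegression().fit(other[:-1], target[1:])  # features at slot t -> target at t+1

def maybe_prewarm(recent_invocations, threshold=0.5):
    """Invoke (warm) the target function ahead of time if an invocation looks likely."""
    p = model.predict_proba(recent_invocations.reshape(1, -1))[0, 1]
    if p >= threshold:
        print("pre-warming target function (p=%.2f)" % p)  # placeholder for a real invocation
    return p

maybe_prewarm(other[-1])
```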
{"title":"A cooperative task assignment framework with minimum cooperation cost in crowdsourcing systems","authors":"Bo Yin, Zeshu Ai, Jun Lu, Ying Feng","doi":"10.1016/j.jnca.2024.104033","DOIUrl":"10.1016/j.jnca.2024.104033","url":null,"abstract":"<div><div>Crowdsourcing provides a new problem-solving paradigm that utilizes the intelligence of crowds to solve computer-hard problems. Task assignment is a foundation problem in crowdsourcing systems and applications. However, existing task assignment approaches often assume that workers operate independently. In reality, worker cooperation is necessary. In this paper, we address the cooperative task assignment (CTA) problem where a worker needs to pay a monetary cost to another worker in exchange for cooperation. Cooperative working also requires one task to be assigned to more than one worker to ensure the reliability of crowdsourcing services. We formalize the CTA problem with the goal of minimizing the total cooperation cost of all workers under the workload limitation of each worker. The challenge is that the individual cooperation cost that a worker pays for a specific task highly depends on the task distribution. This increases the difficulty of obtaining the assignment instance with a small cooperation cost. We prove that the CTA problem is NP-hard. We propose a two-stage cooperative task assignment framework that first assigns each task to one worker and then makes duplicate assignments. We also present solutions to address the dynamic scenarios. Extensive experimental results show that the proposed framework can effectively reduce the cooperation cost.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104033"},"PeriodicalIF":7.7,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142320134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Node and relevant data selection in distributed predictive analytics: A query-centric approach","authors":"Tahani Aladwani , Christos Anagnostopoulos , Kostas Kolomvatsos","doi":"10.1016/j.jnca.2024.104029","DOIUrl":"10.1016/j.jnca.2024.104029","url":null,"abstract":"<div><div>Distributed Predictive Analytics (DPA) refers to constructing predictive models based on data distributed across nodes. DPA reduces the need for data centralization, thus, alleviating concerns about data privacy, decreasing the load on central servers, and minimizing communication overhead. However, data collected by nodes are inherently different; each node can have different distributions, volumes, access patterns, and features space. This heterogeneity hinders the development of accurate models in a distributed fashion. Many state-of-the-art methods adopt random node selection as a straightforward approach. Such method is particularly ineffective when dealing with data and access pattern heterogeneity, as it increases the likelihood of selecting nodes with low-quality or irrelevant data for DPA. Consequently, it is only after training models over randomly selected nodes that the most suitable ones can be identified based on the predictive performance. This results in more time and resource consumption, and increased network load. In this work, holistic knowledge of nodes’ data characteristics and access patterns is crucial. Such knowledge enables the successful selection of a subset of suitable nodes for each DPA task (query) before model training. Our method engages the most suitable nodes by predicting their relevant distributed data and learning predictive models <em>per</em> query. We introduce a novel DPA query-centric mechanism for node and relevant data selection. We contribute with (i) predictive selection mechanisms based on the availability and relevance of data per DPA query and (ii) various distributed machine learning mechanisms that engage the most suitable nodes for model training. We evaluate the efficiency of our mechanism and provide a comparative assessment with other methods found in the literature. Our experiments showcase that our mechanism significantly outperforms other approaches being applicable in DPA.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104029"},"PeriodicalIF":7.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142327043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed enhanced multi-objective evolutionary algorithm based on decomposition for cluster analysis in wireless sensor network","authors":"Anita Panwar, Satyasai Jagannath Nanda","doi":"10.1016/j.jnca.2024.104032","DOIUrl":"10.1016/j.jnca.2024.104032","url":null,"abstract":"<div><p>Conventional clustering algorithms do not recognize patterns and structures with contradicting objectives in large, distributed datasets. Distributed clustering leverages rapid processing capabilities to allow multiple nodes to work together. This paper proposes a Distributed clustering based on Multiobjective Evolutionary Algorithm by Decomposition (D-MOEA/d) to solve various multiobjective optimization problems in wireless sensor networks (WSNs). In MOEA/d, a multiobjective optimization problem decomposes into several scalar optimization subproblems, each focusing on a distinct objective. Each subproblem is expressed as a clustering problem that uses local data to perform distributed clustering. The proposed method has been extended to achieve improved accuracy in less time by using a smaller feature subset with less redundancy. The Distributed Enhanced MOEA/d (DE-MOEA/d) avoids local optima by achieving diversity in the population using fuzzy-based nearest neighbor selection, sparse population initialization, and evolved mutation operator. This integration improves the accuracy of the clustering process at WSN nodes, ensuring the attainment of well-balanced solutions across multiple optimization criteria in the distributed environment. Average Euclidean and total symmetrical deviations are the two cost functions used to minimize while clustering on the MOEA/d framework. Six real-life WSN datasets are used to assess the performance of the proposed technique: (1) the Delhi air pollution dataset, (2) the Canada weather station dataset, (3) the Thames River water quality dataset, (4) the Narragansett Bay water quality dataset, (5) the Cook Agricultural land dataset and 6) Gordon Soil dataset. The simulation results of both proposed algorithms are compared with Multiobjective distributed particle swarm optimization (DMOPSO) and Distributed K-means (DK-Means). The proposed algorithm DE-MOEA/d performs better in terms of the Silhouette index (SI), Dunn index (DI), Davies–Bouldin index (DBI), and Kruskal–Wallis (KW) statistical test.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104032"},"PeriodicalIF":7.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142274300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security risks and countermeasures of adversarial attacks on AI-driven applications in 6G networks: A survey","authors":"Van-Tam Hoang , Yared Abera Ergu , Van-Linh Nguyen , Rong-Guey Chang","doi":"10.1016/j.jnca.2024.104031","DOIUrl":"10.1016/j.jnca.2024.104031","url":null,"abstract":"<div><p>The advent of sixth-generation (6G) networks is expected to start a new era in mobile networks, characterized by unprecedented high demands on dense connectivity, ultra-reliability, low latency, and high throughput. Artificial intelligence (AI) is at the forefront of this progress, optimizing and enabling intelligence for essential 6G functions such as radio resource allocation, slicing, service offloading, and mobility management. However, AI is subject to a wide range of security risks, most notably adversarial attacks. Recent studies, inspired by computer vision and natural language processing, show that adversarial attacks have significantly reduced performance and caused incorrect decisions in wireless communications, jeopardizing the perspective of transforming AI-based 6G core networks. This survey presents a thorough investigation into the landscape of adversarial attacks and defenses in the realm of AI-powered functions within classic wireless networks, open radio access networks (O-RAN), and 6G networks. Two key findings are as follows. First, by leveraging shared wireless networks, attackers can provide noise perturbation or signal sampling for interference, resulting in misclassification in AI-based channel estimation and signal classification. From these basic weaknesses, 6G introduces new threat vectors from AI-based core functionalities, such as malicious agents in federated learning-based service offloading and adversarial attacks on O-RAN near-real-time RIC (xApp). Second, adversarial training, trustworthy mmWave/Terahertz datasets, adversarial anomaly detection, and quantum technologies for adversarial defenses are the most promising strategies for mitigating the negative effects of the attacks. This survey also identifies possible future research topics for adversarial attacks and countermeasures in 6G AI-enabled technologies.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104031"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142274293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An AKA protocol for 5G-assisted D2D communication in Out-of-Coverage scenario","authors":"Ponjit Borgohain, Hiten Choudhury","doi":"10.1016/j.jnca.2024.104025","DOIUrl":"10.1016/j.jnca.2024.104025","url":null,"abstract":"<div><p>5G-assisted Device to Device (D2D) communication can be broadly categorized into three use case scenarios: In Coverage, Relay Coverage, and Out-of Coverage. The main challenge lies in ensuring secure communication in Out-of Coverage scenarios, as in this situation, neither of the two devices is within the 5G network’s coverage area. Although several researchers have developed authentication mechanisms for securing D2D communication, most are unsuitable for Out-of Coverage scenarios. Additionally, many of these mechanisms cannot withstand free-riding attacks due to the absence of a trusted entity. In a 5G cellular network, a trust relationship can be established between a registered device and the home network through mutual authentication whenever the former is within the latter’s coverage area. Leveraging this trust, this paper proposes a lightweight 5G-assisted authentication protocol for mutual authentication between two communicating devices in Out-of Coverage scenarios. The proposed protocol meets the necessary security goals and mitigates various security attacks, including free-riding attacks. The correctness of the proposed protocol is formally established using the Scyther tool and Random Oracle Model. Furthermore, performance analysis shows that the protocol is efficient in terms of computation overhead, communication overhead and energy consumption compared to similar works in the literature. The computation overhead is found to be 596.12 ms for the Requestor device and 587.26 ms for the Requestee device. The communication overhead is 2720 bits and the total energy consumption for both devices combined is found to be 3016.93 millijoules.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104025"},"PeriodicalIF":7.7,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized hierarchical coded caching","authors":"Juan Eloy Espozo-Espinoza , Manuel Fernández-Veiga , Francisco Troncoso-Pastoriza","doi":"10.1016/j.jnca.2024.104027","DOIUrl":"10.1016/j.jnca.2024.104027","url":null,"abstract":"<div><p>Optimizing data traffic is a key concern in content distribution networks to reduce the bandwidth needed to serve the requested content to the final users. In this context, hierarchical coded caching has been proposed as an effective method for reducing traffic. The typical two-level scenario consists in a tree-like structure: on the first layer, a set of intermediate nodes or helpers with local caches store and serve content to a subset of users at the second layer, who also have their local caches. A central server partially stores its content between the first- and second-layer caches, in uncoded form, and subsequently transmits coded messages to satisfy the files demanded by the users. This topology has been thoroughly analyzed in the literature over the past years; however, little attention has been put in arbitrary topologies, where users might be connected to multiple helper nodes, which is especially interesting for scenarios such as wireless networks. Thus, in this paper we study a hierarchical two-level coded caching system with an arbitrary topology and propose a novel content distribution method for this generalized model that leverages the multiple connections between users and helpers to improve the performance of the distribution process. We give achievability results for this system and compare its performance with other classical single-layer and two-layer coded caching systems. Our results show that further gains are possible by using two levels of caching in different operating regimes.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104027"},"PeriodicalIF":7.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142236883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SSBM: A spatially separated boxes-based multi-tab website fingerprinting model","authors":"Xueshu Hong , Xingkong Ma , Shaoyong Li , Yiqing Cai , Bo Liu","doi":"10.1016/j.jnca.2024.104023","DOIUrl":"10.1016/j.jnca.2024.104023","url":null,"abstract":"<div><p>In recent years, the website fingerprinting (WF) attack against the Tor anonymity system has become a hot research issue. The state-of-the-art WF studies have shown that the detection accuracy of websites is up to more than 95%. However, they are mainly conducted under the single-tab assumption, where each sample contains only one website traffic. The single-tab setting could not be realistic because users often open multiple tabs to browse simultaneously. The requests and responses from multiple tabs will overlap and interfere with each other, destroying existing single-tab WF attacks. In addition, the proposed multi-tab WF attack works poorly when traffic overlaps seriously. It remains challenging to implement WF attacks in multi-tab scenarios. This paper investigates a new spatial separated boxes-based multi-tab website fingerprinting model, called SSBM, to solve the multi-tab WF problem. It is an end-to-end model that separates traffic by equal-sized boxes and extracts features with convolutional neural networks. By predicting the label of each box, the tabs of the whole traffic are inferred. We design and implement SSBM and compare it with state-of-the-art multi-tab WF attacks in two different multi-tab modes: overlapping mode and delayed mode. In the overlapping mode, SSBM can successfully identify 81.24% of the first tab and 64.72% of the second tab when the overlapping proportions of the two tabs’ traffic reaches 50%, which are 4% and 29% higher than the current strongest BAPM. In the delayed mode, when the second tab traffic starts to overlap with the first tab traffic with a 5-second delay, SSBM improves the first tab’s classification accuracy from 60% to 69% and the second tab’s detection rates from 33% to 53%. Moreover, SSBM achieves the highest improvement, nearly 40%, in the three-tab evaluations. The experimental results show that SSBM outperforms existing multi-tab WF attack methods.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104023"},"PeriodicalIF":7.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142236884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}