Latest Articles in Transactions on Emerging Telecommunications Technologies

GIJA: Enhanced geyser-inspired Jaya algorithm for task scheduling optimization in cloud computing
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-07-10 DOI: 10.1002/ett.5019
Laith Abualigah, Ahmad MohdAziz Hussein, Mohammad H. Almomani, Raed Abu Zitar, Mohammad Sh. Daoud, Hazem Migdady, Ahmed Ibrahim Alzahrani, Ayed Alwadain
Abstract: Task scheduling optimization plays a pivotal role in enhancing the efficiency and performance of cloud computing systems. This article introduces GIJA (Geyser-Inspired Jaya Algorithm), a novel optimization approach tailored to task scheduling in cloud computing environments. GIJA integrates the principles of the geyser-inspired algorithm with the Jaya algorithm, augmented by a Levy flight mechanism, to address the complexities of task scheduling optimization. The motivation for this work stems from the growing demand for efficient resource utilization and task management in cloud computing, driven by the proliferation of Internet of Things (IoT) devices and the increasing reliance on cloud-based services. Traditional task scheduling algorithms often struggle with dynamic workloads, heterogeneous resources, and varying performance objectives, necessitating innovative optimization techniques. GIJA takes its cue from the eruptive dynamics of geysers, which channel resources with natural efficiency, to guide task scheduling decisions. Combining this geyser-inspired approach with the simplicity and effectiveness of the Jaya algorithm yields a robust optimization framework that adapts to diverse cloud computing environments, while the Levy flight mechanism injects stochasticity into the optimization process, broadening the exploration of the solution space and accelerating convergence. To evaluate GIJA's efficacy, extensive experiments are conducted on synthetic and real-world datasets representative of cloud computing workloads. Comparative analyses against existing task scheduling algorithms, including AOA, RSA, DMOA, PDOA, LPO, SCO, GIA, and GIAA, demonstrate GIJA's superior performance in solution quality, convergence rate, diversity, and robustness. The findings show that GIJA provides promising solution quality (95%) for addressing the complexities of task scheduling in cloud environments, with implications for enhancing system performance, scalability, and resource utilization.
Volume 35, Issue 7.
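The abstract does not give GIJA's update equations, but the underlying Jaya rule and the Levy flight perturbation are both standard. A minimal sketch, assuming the textbook Jaya update (move toward the best solution, away from the worst) plus a small Mantegna-style Levy step; all names, parameters, and the 0.01 step scale are illustrative, not the authors' implementation:

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-distributed step
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def jaya_levy(objective, dim, pop_size=20, iters=200, lo=-10.0, hi=10.0):
    """Jaya: each candidate moves toward the current best solution and
    away from the worst, here with an added Levy term for exploration."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [objective(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            cand = []
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (x[d]
                     + r1 * (best[d] - abs(x[d]))     # attraction to best
                     - r2 * (worst[d] - abs(x[d]))    # repulsion from worst
                     + 0.01 * levy_step())            # Levy flight term
                cand.append(min(hi, max(lo, v)))
            if objective(cand) < objective(x):        # greedy acceptance
                pop[i] = cand
    return min(objective(x) for x in pop)

def sphere(x):
    """Toy objective standing in for a task-scheduling cost function."""
    return sum(v * v for v in x)
```

In a scheduling setting the objective would score a candidate task-to-VM assignment (e.g., makespan) rather than the sphere function used here.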
Citations: 0
Development of novel intrusion detection in Internet of Things using improved dart game optimizer-derived optimal cascaded ensemble learning
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-07-10 DOI: 10.1002/ett.5018
A. Shali, Dr. A. Chinnasamy, P. Selvakumari
Background of the Study: The Internet of Things (IoT) industry has accelerated its development with the support of advanced information technology and economic expansion. The gradual improvement of the IoT industry system has formed a complete industrial foundation comprising software, chips, electronic components, IoT services, integrated systems, machinery, and telecom operators. As the number of IoT devices grows exponentially, the attack surface available to cybercriminals widens, enabling potentially more damaging operations; as a result, the security sector has witnessed a rise in cyberattacks. Hackers use several methods to copy and modify information in the IoT environment. Intrusion detection (ID) models use machine learning techniques to identify and categorize attacks in IoT networks.
Objectives: This study explores an ID system built on heuristic-assisted deep learning to detect attacks in the IoT effectively. First, IoT data are gathered from benchmark resources. The gathered data are preprocessed for cleaning, then transformed and fed to the feature extraction stage, which uses a one-dimensional convolutional neural network (1D-CNN) whose target-based pooling layer yields the features. These deep features are passed to the ID phase, where a cascaded ensemble learning (CEL) approach detects intrusions. Hyperparameter tuning is carried out with a newly proposed improved darts game optimizer (IDGO) algorithm, whose main objective is to maximize detection accuracy.
Findings: In the experiments, the developed model achieves 86% accuracy, with shorter detection time and higher detection efficiency.
Volume 35, Issue 7.
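The paper's cascaded ensemble learning (CEL) details are not spelled out in the abstract. A generic cascade, in which each stage's votes are appended to the feature vector seen by the next stage, can be sketched as follows; the one-rule stump learners and thresholds are made up for illustration:

```python
def stump(feature_idx, threshold):
    """A one-rule base classifier: predict 1 if x[feature_idx] > threshold."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

def cascade_predict(stages, x):
    """Cascaded ensemble: each stage is a list of base learners, and its
    votes are appended to the feature vector before the next stage runs.
    The final decision is a majority vote of the last stage."""
    features = list(x)
    votes = []
    for stage in stages:
        votes = [learner(features) for learner in stage]
        features = features + votes  # feed stage outputs forward
    return 1 if sum(votes) * 2 >= len(votes) else 0

# Toy two-stage cascade over a 2-feature input; stage 2 reads stage 1's votes
stage1 = [stump(0, 3.0), stump(1, 0.5)]
stage2 = [stump(2, 0.5), stump(3, 0.5)]
```

In the paper's pipeline the input `x` would be the deep features from the 1D-CNN, and the base learners would be full classifiers tuned by IDGO rather than hand-set stumps.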
Citations: 0
HRF-ExGB: Hybrid random forest-extreme gradient boosting for mobile edge computing
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-07-06 DOI: 10.1002/ett.5010
Muthukrishnan Anuradha, John Jean Justus, Kaliyaperumal Vijayalakshmi, JK Periasamy
Abstract: The sharp rise in smartphone demand has driven the development of increasingly sophisticated mobile applications such as augmented reality, facial recognition, and natural language processing, and the spread of wireless sensors and wearable devices has further accelerated this growth. Given the explosive growth of the Internet and the distributed computing resources of edge devices in mobile edge computing (MEC), a suitable controller is needed to ensure effective utilization of those resources. Existing approaches, however, can incur longer computation times, higher energy consumption, and unaddressed security issues. To overcome these problems, this paper proposes Hybrid Random Forest-Extreme Gradient Boosting (HRF-XGBoost) to improve computation offloading and joint resource allocation predictions. In a wireless-powered multiuser MEC system, the starling murmuration optimization model determines the ideal task offloading options. XGBoost is combined with a random forest classifier to form the HRF-XGBoost architecture, which speeds up the process while preserving the battery of the user's device. Once the best computation offloading decision for mobile users (MUs) has been established, an offloading method is created from specialized processes. Experimental results show that the method reduces system overhead and time complexity by selecting fewer tasks and optimally eliminating the rest, and it keeps execution time optimized even as the number of mobile users grows. The proposed model greatly improves overall system performance compared with existing techniques.
Volume 35, Issue 7.
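The abstract does not specify how the random forest and XGBoost components are combined. One common hybrid pattern — a bagged ensemble whose residuals are then fitted by gradient boosting — looks roughly like the toy 1-D regression sketch below; this is an assumption about the architecture, not the authors' implementation:

```python
import random

def fit_stump(xs, ys):
    """Best single-split regressor on 1-D inputs (least squares)."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - (lm if x <= t else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:                      # degenerate sample: predict the mean
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def hybrid_rf_boost(xs, ys, n_trees=5, n_boost=10, lr=0.5):
    # Stage 1: bagged "forest" of stumps, averaged (random-forest flavour)
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        forest.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    base = lambda x: sum(t(x) for t in forest) / len(forest)
    # Stage 2: gradient boosting on the forest's residuals (XGBoost flavour)
    boosters = []
    resid = [y - base(x) for x, y in zip(xs, ys)]
    for _ in range(n_boost):
        s = fit_stump(xs, resid)
        boosters.append(s)
        resid = [r - lr * s(x) for x, r in zip(xs, resid)]
    return lambda x: base(x) + sum(lr * b(x) for b in boosters)
```

A real system would use full trees over the MEC state features (task size, channel quality, queue lengths) instead of 1-D stumps.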
Citations: 0
Quantum-safe lattice-based mutual authentication and key-exchange scheme for the smart grid
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-07-02 DOI: 10.1002/ett.5017
Hema Shekhawat, Daya Sagar Gupta
Abstract: The smart grid network (SGN) is expected to leverage advances in the Internet of Things (IoT) to enable effective delivery and monitoring of energy. By integrating communication, computing, and information tools such as smart sensors and meters to facilitate monitoring, prediction, and management of power usage, the SGN can improve the competence of the power-grid architecture. The effective deployment of IoT-powered SGNs, however, hinges on strong security protocols. With the advent of quantum computers, classic cryptographic algorithms based on integer factorization and the Diffie-Hellman assumptions may no longer be suitable for securing sensitive SGN data. This paper therefore proposes a secure quantum-safe mutual authentication and key-exchange (MAKe) mechanism for SGNs that relies on the hardness of the small integer solution and inhomogeneous small integer solution lattice problems. The protocol offers confidentiality, anonymity, and hash-based mutual authentication with a key-exchange agreement, and it allows mutual trust to be created and validated between smart meters (SMs) and the neighbourhood-area network gateway over an insecure wireless channel. A random oracle model is used for the formal security analysis, which demonstrates the algorithm's ability to withstand various known attacks. The performance analysis shows the proposed approach outperforms comparable schemes, with at least 22.07% lower energy utilization, 51.48% lower storage and communication costs, and 76.28% lower computational costs, making it suitable for resource-constrained SGNs.
Volume 35, Issue 7.
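The small integer solution (SIS) and inhomogeneous SIS (ISIS) problems that the scheme's security rests on have standard definitions, reproduced here for reference:

```latex
\textbf{SIS}_{n,m,q,\beta}:\ \text{given a uniform } A \in \mathbb{Z}_q^{\,n \times m},
\ \text{find } z \in \mathbb{Z}^m \setminus \{0\}
\ \text{with } A z \equiv 0 \pmod{q} \ \text{and}\ \lVert z \rVert \le \beta.

\textbf{ISIS}_{n,m,q,\beta}:\ \text{given a uniform } A \in \mathbb{Z}_q^{\,n \times m}
\ \text{and a target } y \in \mathbb{Z}_q^{\,n},
\ \text{find } z \in \mathbb{Z}^m
\ \text{with } A z \equiv y \pmod{q} \ \text{and}\ \lVert z \rVert \le \beta.
```

Both problems are conjectured hard even for quantum computers, which is what makes them a drop-in replacement for factoring- and Diffie-Hellman-based assumptions in post-quantum protocols.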
Citations: 0
Multi-graph representation spatio-temporal attention networks for traffic forecasting in the cinematic metaverse
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-07-02 DOI: 10.1002/ett.5020
Ke Li, Xiaoming He, Yinqiu Liu, Meng Chen
Abstract: The cinematic metaverse aims to create a virtual space built around the context of a film. Users enter this space as avatars, experiencing the cinematic plot firsthand in an immersive manner. This requires rational computation resource allocation and synchronization algorithms that meet multi-objective joint optimization demands such as low latency and high throughput, ensuring users can switch seamlessly between the virtual and real worlds and acquire immersive experiences. Unfortunately, the explosive growth in the number of users makes it difficult to optimize multiple objectives jointly, and predicting the traffic generated by users' avatars is significant for the optimization process. Although traffic prediction models based on graph neural networks achieve superior accuracy, they rely only on topological graphs built from physical distances and fail to comprehensively reflect the real relationships between avatars in the cinematic metaverse. To address this, we present novel Multi-Graph Representation Spatio-Temporal Attention Networks (MGRSTANet) for traffic prediction in the cinematic metaverse. Based on multiple sources of topological graph information (e.g., physical distance, centrality, and similarity), a Multi-Graph Embedding (MGE) module first generates multiple graph representations, reflecting the real relationships between avatars more comprehensively. A Spatio-Temporal Attention (STAtt) module then extracts spatio-temporal correlations in each graph representation to improve prediction accuracy. Simulation experiments show that MGRSTANet outperforms state-of-the-art baselines in prediction accuracy, making it well suited to traffic forecasting in the cinematic metaverse.
Volume 35, Issue 7.
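Two of the graph constructions the abstract mentions — distance-based and similarity-based adjacency — are standard and can be sketched directly; the Gaussian-kernel bandwidth and function names below are illustrative:

```python
import math

def distance_graph(coords, sigma=1.0):
    """Gaussian-kernel adjacency from pairwise Euclidean distances:
    closer nodes get edge weights nearer 1."""
    n = len(coords)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = math.dist(coords[i], coords[j])
                A[i][j] = math.exp(-d * d / (2 * sigma ** 2))
    return A

def similarity_graph(histories):
    """Cosine-similarity adjacency from per-node traffic histories:
    nodes with parallel traffic patterns get weights near 1."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    n = len(histories)
    return [[cos(histories[i], histories[j]) if i != j else 0.0
             for j in range(n)] for i in range(n)]
```

A multi-graph model like MGRSTANet would feed each such adjacency matrix to its own embedding branch before fusing the representations.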
Citations: 0
Hardware-software SoC co-design analysis and implementation of MIMO-OFDM for 4G/5G/6G eNodeB applications
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-06-26 DOI: 10.1002/ett.5012
Sanket N. Dessai, Hemant Patidar
Abstract: With the emerging data-rate requirements of 5G/6G applications and the reuse of 4G infrastructure for 5G, it is necessary to understand System-on-Chip (SoC) platform-based embedded co-design and implementation of a programmable, reconfigurable MIMO-OFDM system. For both uplink and downlink transmission, these applications require higher data throughput, reduced bit error rates and latency, and increased spectral efficiency. This work describes hardware-software co-design and development of MIMO-OFDM algorithms for 5G and 6G eNodeBs. An efficient pipelined and parallelized architecture based on systolic arrays and CORDIC is applied to IP development for the sub-components of the MIMO-OFDM system. A Zynq platform with computing resources including the PS, a Mali-400 GPU, and the PL is utilized to increase the data rate in the MIMO-OFDM system architecture co-design and implementation. This architectural approach enables data rates of 10-50 Gbps and beyond, toward Tbps, based on the system's programmability and reconfigurability on an efficient SoC platform. The platform can be programmed as MIMO-OFDM, OFDM, or MIMO without OFDM to cover the range of applications and desired data rates. With 64-QAM modulation, performance over three channels (pedestrian, vehicular, and AWGN) is observed in simulation at a predicted multipath channel velocity of 15 km/h. To reach the application clock frequencies, the device's PLL (ZU17EG) scales clock frequencies between 750 and 1600 MHz using a configurable register. Configured as MIMO-OFDM or OFDM, the system achieves an execution time of 300 ms and a data throughput ranging from 71 Gbps to 1749 Gbps with 2×2/4×4 configurations. Device scalability depends on currently available advanced embedded reconfigurable architecture platforms. Massive MIMO and multi-user MIMO will be used in the future to increase throughput and data rates, and future work will focus on a MIMO-OFDM hardware-software embedded architecture and testbed to enhance implementation and verification for the vehicular and pedestrian channels.
Volume 35, Issue 7.
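The abstract cites CORDIC as one of the hardware-efficiency techniques behind the IP blocks. As a reminder of what a rotation-mode CORDIC kernel computes, here is a floating-point software model (real IP would use fixed-point shifts and adds; the iteration count is illustrative):

```python
import math

def cordic_cos_sin(angle, iterations=32):
    """Rotation-mode CORDIC: compute (cos, sin) of `angle` (radians,
    |angle| <= ~1.74) using only shifts, adds, and a small arctan table --
    the hardware-friendly kernel used inside FFT/rotation IP blocks."""
    # Pre-computed arctan table and aggregate gain for this iteration count
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle          # start pre-scaled so the gain cancels
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0  # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y
```

In an OFDM pipeline the same micro-rotations implement twiddle-factor multiplies in the FFT butterflies without general-purpose multipliers.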
Citations: 0
A spatial-temporal graph gated transformer for traffic forecasting
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-06-26 DOI: 10.1002/ett.5021
Haroun Bouchemoukha, Mohamed Nadjib Zennir, Ahmed Alioua
Abstract: Accurate traffic forecasting is more necessary than ever for transportation departments, given its significant role in traffic planning, management, and control. Most existing methods, however, struggle with complex spatial correlations on road networks, nonlinear temporal dynamics, and long-term prediction. This article proposes a novel spatial-temporal graph gated transformer (STGGT) to overcome these challenges. Unlike Google's Transformer, the model uses a hybrid architecture that integrates graph convolutional networks (GCNs), attention, and gated recurrent units (GRUs) rather than relying on attention alone: GCNs extract spatial dependencies, while attention and GRUs extract temporal dependencies and handle long-term prediction. Experiments indicate that STGGT outperforms state-of-the-art baseline models by 9%-40% on two real-world traffic datasets, offering a promising solution that simultaneously addresses complex spatial correlations, nonlinear temporal dynamics, and long-term prediction.
Volume 35, Issue 7.
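The attention component STGGT shares with transformer architectures is standard scaled dot-product attention. A toy list-of-lists version (not the paper's implementation) makes the computation concrete:

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    and the output is the attention-weighted mix of the values."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

In a traffic model, queries/keys/values would be learned projections of per-timestep node embeddings, so the weights express which past timesteps matter for each prediction.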
Citations: 0
DRL-based customised resource allocation for sub-slices in 6G network slicing
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-06-26 DOI: 10.1002/ett.5016
Meignanamoorthi D, Vetriselvi V
Abstract: 6G network services demand significant computing resources. Network slicing offers a potential solution by enabling customized services on shared infrastructure, but dynamic service needs in heterogeneous environments pose challenges to resource provisioning. 6G applications such as extended reality and connected vehicles require service differentiation for optimal quality of experience (QoE), and granular resource allocation within slices is a complex issue. To address the complexity of QoE services in dynamic slicing, a deep reinforcement learning (DRL) approach called customized sub-slicing is proposed: access, transport, and core slices are split into sub-slices to handle service differentiation among 6G applications. The focus is on creating sub-slices and dynamically scaling slices for intelligent resource allocation and reallocation based on the QoS requirements of each sub-slice. The problem is formulated as an integer linear programming (ILP) optimization problem with real-world constraints, and an Advantage Actor-Critic (A2C)-based Network Sub-slice Allocation and Optimization (NS-AO) algorithm is proposed to allocate sub-slices and scale resources dynamically. Experimental results demonstrate that the proposed algorithm outperforms the state of the art in training stability, learning time, sub-slice acceptance rate, and resilience to topology changes.
Volume 35, Issue 7.
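The advantage signal at the core of any A2C agent is standard. Assuming one-step TD advantages (the simplest variant; the paper may use n-step returns or GAE), the computation is:

```python
def td_advantages(rewards, values, gamma=0.99):
    """One-step TD advantage A_t = r_t + gamma * V(s_{t+1}) - V(s_t),
    the signal A2C uses to weight the policy gradient.
    `values` holds the critic's V(s_0..s_T), bootstrap value last,
    so len(values) == len(rewards) + 1."""
    return [rewards[t] + gamma * values[t + 1] - values[t]
            for t in range(len(rewards))]
```

In NS-AO's setting, a reward would reflect sub-slice acceptance and QoS satisfaction, and positive advantages push the actor toward allocation actions that beat the critic's expectation.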
Citations: 0
PAM: Predictive analytics and modules-based computation offloading framework using greedy heuristics and 5G NR-V2X
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-06-25 DOI: 10.1002/ett.5003
Muhammad Ilyas Khattak, Hui Yuan, Ayaz Ahmad, Manzoor Ahmed, Ajmal Khan,  Inamullah
Abstract: Recent advances in distributed computing systems show promise in enabling many next-generation applications across fields such as healthcare, interactive gaming, and video streaming. Among such solutions are the evolving vehicular fog computing (VFC) frameworks that build on IEEE and 3GPP protocols and use advanced optimization algorithms. However, these approaches often rely on outdated protocols or on computationally intensive mathematical techniques for solving or representing their optimization models, and some frameworks have not thoroughly considered the application type during their evaluation and validation phases. In response, we developed the predictive analytics and modules (PAM) framework, which operates on a time- and event-driven basis and uses up-to-date 3GPP protocols to address the inherent unpredictability of the VFC-enabled distributed computing systems required in smart healthcare. Through a combination of a greedy heuristic and a distributed offloading architecture, PAM efficiently optimizes task offloading and computation allocation decisions via specialized algorithms that support computationally weaker devices, all within 100 ms. PAM's performance is assessed against three benchmark methodologies using average response time, probability density function, Pareto analysis, algorithmic run time, and algorithmic complexity.
Volume 35, Issue 7.
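The abstract does not detail PAM's greedy heuristic. A generic greedy offloading rule — assign each task to whichever node currently yields the earliest finish time — can be sketched as follows; the task and node names are made up, and real VFC scheduling would also account for transfer latency over NR-V2X links:

```python
def greedy_offload(tasks, nodes):
    """Greedy list scheduling: tasks (name -> CPU cycles) are taken
    largest-first and placed on the node (name -> speed, cycles/sec)
    with the earliest resulting finish time.
    Returns (assignment plan, makespan)."""
    finish = {name: 0.0 for name in nodes}  # accumulated busy time per node
    plan = {}
    for task, cycles in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = min(nodes, key=lambda n: finish[n] + cycles / nodes[n])
        finish[best] += cycles / nodes[best]
        plan[task] = best
    return plan, max(finish.values())
```

Largest-first placement is the classic LPT heuristic: cheap enough to run inside a sub-100 ms decision budget while staying within a constant factor of the optimal makespan.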
Citations: 0
Less-sample cooperative spectrum sensing against large-scale Byzantine attack in cognitive wireless sensor networks
IF 2.5 · CAS Zone 4 · Computer Science
Transactions on Emerging Telecommunications Technologies Pub Date : 2024-06-24 DOI: 10.1002/ett.5015
Jun Wu, Mingyuan Dai, Xuyang Teng, Chao Wu, Meilin He, Haoyu Liang, Hao Chen, Ye Jin, Weiwei Cao
Abstract: Cooperative spectrum sensing (CSS) has emerged as a promising strategy for identifying available spectrum resources by leveraging spatially distributed sensors in cognitive wireless sensor networks (CWSNs). Nevertheless, this open collaborative approach is susceptible to security threats from malicious sensors, specifically Byzantine attack, which can significantly undermine CSS accuracy. Moreover, in extensive CWSNs the CSS process imposes substantial communication overhead on the reporting channel, considerably diminishing cooperative efficiency. To tackle these challenges, this article introduces a refined CSS approach termed weighted sequential detection (WSD). The method incorporates channel state information to validate the global decision made by the fusion center and to assess each sensor's trust value. Trust-based weights are assigned to sensing samples, which are then integrated into a sequential detection framework within a defined time window that prioritizes samples in descending order of trust. Numerical simulations reveal that WSD outperforms conventional fusion rules in error probability, sample size, achievable throughput, and latency, even under varying degrees of Byzantine attack, marking a substantial advance in the reliability and efficiency of CSS.
Volume 35, Issue 7.
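The abstract describes accumulating trust-weighted samples in descending trust order until a decision is reached. A hedged sketch, assuming a trust-weighted sequential probability ratio test over binary sensing reports (the detection probabilities and threshold below are illustrative):

```python
import math

def weighted_sequential_test(samples, trusts, p1=0.8, p0=0.2, a=4.0):
    """Trust-weighted SPRT over binary sensing reports: process samples in
    descending trust order, accumulate each report's log-likelihood ratio
    scaled by its trust weight, and stop when a threshold is crossed.
    p1/p0 are the report-is-1 probabilities under H1 (channel busy) and
    H0 (channel idle). Returns ('H1'|'H0'|'undecided', samples used)."""
    llr = 0.0
    ordered = sorted(zip(samples, trusts), key=lambda sw: -sw[1])
    for n, (s, w) in enumerate(ordered, start=1):
        step = math.log(p1 / p0) if s else math.log((1 - p1) / (1 - p0))
        llr += w * step            # low-trust (possibly Byzantine) reports
        if llr >= a:               # contribute little to the statistic
            return "H1", n
        if llr <= -a:
            return "H0", n
    return "undecided", len(samples)
```

Because trusted sensors are consumed first and each decision stops at the threshold, the test typically needs far fewer reports than a fixed-sample fusion rule — the "less sample" property the title refers to.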
Citations: 0