{"title":"Secure and efficient device-to-device communication in IoT: The DMBSOA-enhanced MQTT protocol","authors":"Bharathi Shantha Loganathan, Sathya Priya Jaganathan","doi":"10.1002/ett.5024","DOIUrl":"10.1002/ett.5024","url":null,"abstract":"<p>The Internet of Things (IoT) plays a crucial role in enhancing technology by facilitating data transfer, storage, control, and management across networks. Secure communication within IoT environments remains a significant challenge. This research aims to enhance IoT security by integrating Message Queuing Telemetry Transport (MQTT) and MQTT-Sensor Network (MQTT-SN) protocols with the Dual Mutation-Based Seagull Optimization Algorithm (DMBSOA). MQTT, known for its lightweight and dependable messaging, is widely used in the IoT community but is vulnerable to cyber-attacks, particularly concerning privacy and authentication. DMBSOA, inspired by seagulls' foraging behavior, optimizes MQTT settings to enhance security, reliability, and performance. The proposed model dynamically adjusts key parameters such as transmission frequency, Quality of service (QoS) levels, and message size to improve energy consumption, throughput, end-to-end delay, and packet delivery ratio. A comprehensive system model is presented, comprising publisher, subscriber, and broker nodes, with security mechanisms integrated into the broker to ensure data integrity, authentication, and encryption. MQTT operates over Transmission Control Protocol/Internet Protocol (TCP/IP), while MQTT-SN uses User Datagram Protocol (UDP), catering to resource-constrained devices and low-power modes. The proposed technique attained throughput (68 kB), energy consumption (120 mJ), security level (96%), energy efficiency (98.80%), path loss (22 dB), end-to-end delay (35 ms), processing time (230 s) and packet delivery ratio (0.98). 
The DMBSOA-optimized MQTT protocol demonstrates superior performance compared to existing models, highlighting its potential to meet the evolving demands of IoT security. This research underscores the effectiveness of DMBSOA in enhancing MQTT protocol security and efficiency, providing a promising solution for secure IoT communication.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 8","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141867166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential detection with feedback information for two-way cooperative spectrum sensing in cognitive internet of things","authors":"Jun Wu, Mingkun Su, Jianrong Bao, Lei Qiao, Xiaorong Xu, Hao Wang, Gefei Zhu, Weiwei Cao","doi":"10.1002/ett.5023","DOIUrl":"10.1002/ett.5023","url":null,"abstract":"<p>With the rapid growth of internet of things (IoT) devices, cooperative spectrum sensing (CSS) has emerged as a promising solution to leverage the spatial diversity of multiple secondary IoT sensing nodes (SNs) for spectrum availability. However, the cooperative paradigm also incurs increased cooperative costs between each SN and the fusion center (FC), leading to decreased cooperative efficiency and achievable throughput, especially in large-scale cognitive IoT (CIoT). To address these challenges, we present a sequential detection with feedback information (SD-FI) approach in this paper. To achieve this objective, we propose a two-way CSS model that formulates an optimization problem of Bayes cost in a quickest detection framework with feedback. To solve this optimization problem, we derive the structure of the optimal local decision rule from the local decision function and determine the optimal detection threshold in conjunction with the cost function. Following the optimal threshold pair, we implement the optimal SD-FI and theoretically demonstrate the uniqueness of the optimal threshold and optimal sensing time. Simulation results demonstrate the superiority of SD-FI in terms of cooperative performance (i.e., detection performance and Bayes cost) and sample size.
Notably, even with limited sensing time, our proposed SD-FI exhibits high throughput, highlighting its effectiveness in enhancing spectrum availability and utilization in CIoT.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 8","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141783406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interference management of cognitive IoT based on interference steering and alignment","authors":"Zhen-xing Sun, Jin-bin Qian, Miao Liu, Chun-ping Nan, Guo-hui Sha","doi":"10.1002/ett.5013","DOIUrl":"https://doi.org/10.1002/ett.5013","url":null,"abstract":"<p>In the cognitive IoT spectrum sharing process, the complex interference environment leads to a low spectral efficiency of the licensed spectrum. To solve this problem, this article allows more cognitive users (CU) to access the licensed spectrum while ensuring reliable transmission of primary users (PU) by introducing the advanced interference steering (IS) technique. Traditional IS methods are only suitable for single-user and single-data-stream scenarios. Moreover, these methods have many drawbacks in multi-user and multi-data-stream scenarios. To overcome these drawbacks, two IS algorithms based on time division (TD) multiple access are proposed in this article: TD-based equivalent interference sub-channel interference steering (TD-EI-SCIS) and TD-based sub-channel interference steering (TD-SCIS). Furthermore, considering the fact that the PU has the highest communication quality priority in cognitive IoT, a joint interference management (JIM) scheme based on TD-EI-SCIS, TD-SCIS, and partial interference alignment (PIA) is proposed to protect the PU. Simulation results show that the spectral efficiency (SE) of the PU in the proposed JIM scheme is not significantly improved compared with the proposed TD-EI-SCIS in the low SNR region. However, from the perspective of the total SE in the whole cognitive IoT system, TD-EI-SCIS is superior to the JIM scheme by about 3 to 4 bit/s/Hz on average.
In the high SNR region, the JIM scheme significantly outperforms TD-EI-SCIS, both in the SE of the PU (by about 10 bit/s/Hz) and in the SE of the total network (by about 7 bit/s/Hz on average).</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 8","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141730322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic service function chain placement in mobile computing: An asynchronous advantage actor-critic based approach","authors":"Heling Jiang, Hai Xia, Mansoureh Zare","doi":"10.1002/ett.5022","DOIUrl":"https://doi.org/10.1002/ett.5022","url":null,"abstract":"<p>Internet of Things (IoT) devices are constantly sending data to the cloud. The resource-rich cloud computing paradigm provides users with significant potential to reduce costs and improve quality of service (QoS). However, the centralized architecture of cloud data centers, often located thousands of miles away from clients, has reduced the efficiency of this paradigm in delay-sensitive and real-time applications. To overcome these restrictions, fog computing was integrated into cloud computing as a new paradigm. Without using the cloud, fog computing can supply the resources needed for IoT devices at the network's edge. Delay is thereby decreased because processing, analysis, and storage are located closer to the clients and the areas where the data is created. In Mobile Edge Computing (MEC) networks, this study sets up an architecture based on Deep Reinforcement Learning (DRL) to deliver online services to end users. We introduce a DRL-based method named DPPR for <span>D</span>ynamic service function chain (SFC) <span>P</span>lacement that uses <span>P</span>arallelized virtual network functions (VNFs) and seeks to optimize the long-term expected cumulative <span>R</span>eward. As an online service provider, DPPR can accomplish processing acceleration through parallel VNF sharing. In addition, by extracting the distribution of initialized VNFs, DPPR improves the capacity to handle subsequent requests.
The conducted simulations demonstrate the efficacy of the proposed method, improving the average number of accepted requests by about 11.7%.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 8","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GIJA: Enhanced geyser-inspired Jaya algorithm for task scheduling optimization in cloud computing","authors":"Laith Abualigah, Ahmad MohdAziz Hussein, Mohammad H. Almomani, Raed Abu Zitar, Mohammad Sh. Daoud, Hazem Migdady, Ahmed Ibrahim Alzahrani, Ayed Alwadain","doi":"10.1002/ett.5019","DOIUrl":"https://doi.org/10.1002/ett.5019","url":null,"abstract":"<p>Task scheduling optimization plays a pivotal role in enhancing the efficiency and performance of cloud computing systems. In this article, we introduce GIJA (Geyser-inspired Jaya Algorithm), a novel optimization approach tailored for task scheduling in cloud computing environments. GIJA integrates the principles of the Geyser-inspired algorithm with the Jaya algorithm, augmented by a Levy Flight mechanism, to address the complexities of task scheduling optimization. The motivation for this research stems from the increasing demand for efficient resource utilization and task management in cloud computing, driven by the proliferation of Internet of Things (IoT) devices and the growing reliance on cloud-based services. Traditional task scheduling algorithms often face challenges in handling dynamic workloads, heterogeneous resources, and varying performance objectives, necessitating innovative optimization techniques. GIJA leverages the eruptive dynamics of geysers, inspired by nature's efficiency in channeling resources, to guide task scheduling decisions. By combining this Geyser-inspired approach with the simplicity and effectiveness of the Jaya algorithm, GIJA offers a robust optimization framework capable of adapting to diverse cloud computing environments. Additionally, the integration of the Levy Flight mechanism introduces stochasticity into the optimization process, enabling the exploration of solution spaces and accelerating convergence. To evaluate the efficacy of GIJA, extensive experiments are conducted using synthetic and real-world datasets representative of cloud computing workloads.
Comparative analyses against existing task scheduling algorithms, including AOA, RSA, DMOA, PDOA, LPO, SCO, GIA, and GIAA, demonstrate the superior performance of GIJA in terms of solution quality, convergence rate, diversity, and robustness. GIJA attains a promising solution quality (95%) for addressing the complexities of task scheduling in cloud environments, with implications for enhancing system performance, scalability, and resource utilization.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141584086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of novel intrusion detection in Internet of Things using improved dart game optimizer-derived optimal cascaded ensemble learning","authors":"A. Shali, Dr. A. Chinnasamy, P. Selvakumari","doi":"10.1002/ett.5018","DOIUrl":"https://doi.org/10.1002/ett.5018","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Background of the Study</h3>\u0000 \u0000 <p>The Internet of things (IoT) industry has accelerated its development with the support of advanced information technology and economic expansion. The gradual improvement of the IoT industry system has formulated a complete industrial foundation, including software, chips, electronic components, IoT services, integrated systems, machinery, and telecom operators. As the number of IoT devices grows exponentially, the attack surface available to cybercriminals enables them to carry out potentially more damaging operations. As a result, the security sector has witnessed a rise in cyberattacks. Hackers use several methods to copy and modify the information in the IoT environment. Machine learning techniques are used by the intrusion detection (ID) model to determine and categorize attacks in IoT networks.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Objectives</h3>\u0000 \u0000 <p>Thus, this study explores an ID system with heuristic-assisted deep learning approaches to effectively detect attacks in the IoT. First, the IoT data is gathered from benchmark resources. Then, the gathered data is preprocessed to perform data cleaning. Next, the data is transformed and fed to the feature extraction stage. The feature extraction is performed with the help of a one-dimensional convolutional neural network (1D-CNN), where the features are extracted from the target-based pooling layer. Then, these attained deep features are fed to the ID phase, where the cascaded ensemble learning (CEL) approach is adopted for detecting the intrusions.
The hyperparameter tuning is done with a newly suggested improved darts game optimizer (IDGO) algorithm, whose main objective is to maximize ID accuracy.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Findings</h3>\u0000 \u0000 <p>In the experimental findings, the developed model provides 86% accuracy, with lower detection time and higher detection efficiency.</p>\u0000 </section>\u0000 </div>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141584085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HRF-ExGB: Hybrid random forest-extreme gradient boosting for mobile edge computing","authors":"Muthukrishnan Anuradha, John Jean Justus, Kaliyaperumal Vijayalakshmi, JK Periasamy","doi":"10.1002/ett.5010","DOIUrl":"https://doi.org/10.1002/ett.5010","url":null,"abstract":"<p>The development of increasingly cutting-edge mobile apps like augmented reality, facial recognition, and natural language processing has been facilitated by the sharp rise in smartphone demand. The increased use of mobile devices like wireless sensors and wearable technology has led to a rapid increase in mobile applications. Due to the explosive growth of the Internet and distributed computing resources of edge devices in mobile edge computing (MEC), it is necessary to have a suitable controller to ensure effective utilization of distributed computing resources. However, the existing approaches can lead to longer computation times, higher energy consumption, and security issues. To overcome these issues, this paper proposes a novel approach called Hybrid Random Forest-Extreme Gradient Boosting (HRF-XGBoost) to enhance the computation offloading and joint resource allocation predictions. In a wireless-powered multiuser MEC system, the starling murmuration optimization model is utilized to determine the ideal task offloading options. XGBoost is combined with a random forest classifier to form an HRF-XGBoost architecture, which is used to speed up the process while preserving the battery of the user's device. An offloading method is created employing certain processes once the best computation offloading decision for Mobile Users (MUs) has been established. Experimental results show that the method reduces system overhead and time complexity by selecting fewer tasks and optimally eliminating the others. It optimizes the execution time even as the number of mobile users increases.
Our model greatly improves overall system performance compared to other existing techniques.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141556641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantum-safe Lattice-based mutual authentication and key-exchange scheme for the smart grid","authors":"Hema Shekhawat, Daya Sagar Gupta","doi":"10.1002/ett.5017","DOIUrl":"10.1002/ett.5017","url":null,"abstract":"<p>The smart grid network (SGN) is expected to leverage advances in the Internet of Things (IoT) to enable effective delivery and monitoring of energy. By integrating communication, computing, and information tools like smart sensors and meters to facilitate the process of monitoring, predictions, and management of power usage, the SGN can improve the competence of the power-grid architecture. However, the effective deployment of IoT-powered SGNs hinges on the deployment of strong security protocols. With the advent of quantum computers, classic cryptographic algorithms based on integer factorization and the Diffie-Hellman assumptions may not be suitable to secure the sensitive data of SGNs. Therefore, in this paper, a secure quantum-safe mutual authentication and key-exchange (MAKe) mechanism is proposed for SGNs, which makes use of the hardness assumptions of the small integer solution and inhomogeneous small integer solution problems on lattices. The proposed protocol is intended to offer confidentiality, anonymity, and hash-based mutual authentication with a key-exchange agreement. Similarly, this scheme allows the creation and validation of mutual trust between the smart meters (SMs) and the neighbourhood-area network gateway over an insecure wireless channel. A random oracle model is then used to perform the formal security analysis of the proposed approach. A thorough formal analysis demonstrates the proposed algorithm's ability to withstand various known attacks.
The performance analysis shows that the proposed approach outperforms other comparable schemes, with at least 22.07% lower energy utilization, 51.48% lower storage and communication costs, and 76.28% lower computational costs, and is thus suitable for resource-constrained SGNs.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-graph representation spatio-temporal attention networks for traffic forecasting in the cinematic metaverse","authors":"Ke Li, Xiaoming He, Yinqiu Liu, Meng Chen","doi":"10.1002/ett.5020","DOIUrl":"10.1002/ett.5020","url":null,"abstract":"<p>The cinematic metaverse aims to create a virtual space with the context of a film. Users can enter this space in the form of avatars, experiencing the cinematic plot firsthand in an immersive manner. This requires us to design a rational computation resource allocation and synchronization algorithm to meet the demands of multi-objective joint optimization, such as low latency and high throughput, which ensures that users can seamlessly switch between virtual and real worlds and acquire immersive experiences. Unfortunately, the explosive growth in the number of users makes it difficult to jointly optimize multiple objectives. Predicting the traffic generated by the users' avatars in the cinematic metaverse is significant for the optimization process. Although graph neural network-based traffic prediction models achieve superior prediction accuracy, these methods rely only on physical distance-based topological graph information, while failing to comprehensively reflect the real relationships between avatars in the cinematic metaverse. To address this issue, we present novel Multi-Graph Representation Spatio-Temporal Attention Networks (MGRSTANet) for traffic prediction in the cinematic metaverse. Specifically, based on multiple topological graph information (e.g., physical distances, centrality, and similarity), we first design a Multi-Graph Embedding (MGE) module to generate multiple graph representations, thus reflecting the real relationships between avatars more comprehensively. The Spatio-Temporal Attention (STAtt) module is then proposed to extract spatio-temporal correlations in each graph representation, thus improving prediction accuracy.
We conduct simulation experiments to evaluate the effectiveness of MGRSTANet. The experimental results demonstrate that our proposed model outperforms the state-of-the-art baselines in terms of prediction accuracy, making it appropriate for traffic forecasting in the cinematic metaverse.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141517078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardware software SoC co-design analysis and implementation of MIMO-OFDM for 4G/5G/6G eNodeB applications","authors":"Sanket N. Dessai, Hemant Patidar","doi":"10.1002/ett.5012","DOIUrl":"https://doi.org/10.1002/ett.5012","url":null,"abstract":"<p>With the emerging challenges for the data rate requirements of 5G/6G applications and reusing the 4G infrastructure for 5G, it is necessary to understand the System-on-Chip (SoC) platform-based embedded co-design and implementation of the programmable and reconfigurable MIMO-OFDM system. For both uplink and downlink data transmissions, these applications require a larger data throughput as well as reduced bit error rates, latency, and increased spectral efficiency. This work describes the co-design and development of hardware and software for the MIMO-OFDM algorithms for 5G and 6G eNodeBs. An efficient design through computer architecture based on pipelining and parallelization using systolic arrays and CORDIC has been applied for the IP development of the sub-components of the MIMO-OFDM systems. A Zynq platform with computing resources including PS, Mali-400 GPU, and PL is utilized to increase the data rate for MIMO-OFDM system architecture co-design and implementation. The architecture approach used in this work enabled a data rate of 10–50 Gbps and beyond, reaching Tbps, based on the system's programmability and reconfigurability with an efficient SoC platform design. The design platform provides programmable modes such as MIMO-OFDM, OFDM, and MIMO without OFDM through software programming for the range of applications of the desired data rates. With 64-QAM modulation, the performance of the three channels (pedestrian, vehicular, and AWGN) at the predicted multipath channel velocity of 15 km/h is observed in simulation. To reach the application clock frequencies, the device's PLL (ZUI7EG) upscales and downscales clock frequencies from 750 to 1600 MHz using a configurable register.
When configured to operate as MIMO-OFDM or OFDM, the system achieves an execution time of 300 msec and a data throughput ranging from 71 Gbps to 1749 Gbps using 2 × 2/4 × 4 configurations. Device scalability depends on currently available advanced embedded reconfigurable architecture platforms. Massive MIMO and multi-user MIMO will be used in the future to increase throughput and data rates. Additionally, future work will focus on creating a MIMO-OFDM hardware-software embedded architecture and testbed to enhance implementation and verification of the vehicular and pedestrian channels.</p>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 7","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141488194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}