Title: Radio Frequency Identification for Drones Using Spectrogram and CNN
Authors: Chaozheng Xue, Tao Li, Yongzhao Li, Y. Ruan, Rui Zhang
DOI: https://doi.org/10.1109/GLOBECOM48099.2022.10000823
Published in: GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 4 December 2022
Abstract: Over the past few years, commercial drones have grown in popularity. However, the pervasive use of drones may pose a range of security risks to sensitive areas such as airports and military bases. Hence, drone detection and identification are critical for governments and security agencies. This paper proposes a radio frequency identification (RFI) system for drones based on spectrograms and a convolutional neural network (CNN). Specifically, the spectrogram is used to represent the fine-grained time-frequency characteristics of drone signals, and a CNN is designed to infer drone types by identifying their spectrograms. In practice, drones have different operating channels, and any one of them can be selected for signal transmission. This means that the carrier frequencies of their signals are unknown, which may result in misclassifications. To address this problem, we collect drone signals from all potential frequency bands and demonstrate that carrier frequency offset (CFO) compensation can significantly improve system performance. Experimental evaluation is performed in real wireless environments involving 6 drones and a Universal Software Radio Peripheral (USRP) X310 platform. The proposed spectrogram-based CNN achieves the best performance compared with IQ-based and FFT-based CNNs, with classification accuracy above 98% for drones operating on arbitrary channels.
{"title":"DFSNet: Deep Fractional Scattering Network for LoRa Fingerprinting","authors":"Tiantian Zhang, Pinyi Ren, Dongyang Xu, Zhanyi Ren","doi":"10.1109/GLOBECOM48099.2022.10000729","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10000729","url":null,"abstract":"Radio frequency fingerprints (RFF) identification is a critical enabling technology to support rapid and scalable device identification in long rang (LoRa) based Internet of Things (IoT). In recent years, the identification precision of RFF has been significantly improved by leveraging artificial intelligence (AI) technologies to deeply exploit RFF features which are hardware-level, unique and resilient. However, traditional AI technologies lack strong interpretability, require massive amounts of training data and occupy huge computing resources. To address above challenges, we in this paper propose a deep fractional scattering network (DFSNet) to extract the RFF features hidden in non-stationary LoRa chirp signal through linear translation-variant multiscale fractional wavelet filters. Due to the fractional-domain deformation stability in DFSNet, the influence of noise on feature extraction can be reduced to the greatest extent by fractional transformation. Firstly, we apply DFSNet to build a hybrid RFF identification interpretability framework where the scattering coefficients of input can be calculated and characterized. Ben-efiting from the application of fractional wavelet transform, we can clearly explain the features represented by each coefficient. Then, the robustness characteristic of the fractional deformation is analyzed. Finally, experiment results show that our proposed hybrid DFSNet can achieve up to about 98.5% recognition accuracy rate with only about 5000 LoRa practical training samples per device.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115593085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Beamforming for Ergodic Rate Maximization of mmWave Massive Grant-Free Systems","authors":"Gang Sun, Xinping Yi, Wen Wang, Wei Xu","doi":"10.1109/GLOBECOM48099.2022.10001570","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10001570","url":null,"abstract":"To meet the escalating demand on spectral resource in massive machine-type communication (mMTC) applications, a critical solution is applying massive grant-free transmission to the millimeter-wave (mmWave) band. In this paper, to maximize the ergodic rate, we propose an efficient hybrid analog/digital beamforming (HBF) design algorithm for the massive grant-free transmission in uplink mmWave systems. Specifically, to make the HBF design problem tractable, we first leverage the deterministic equivalent method to derive an approximate expression of the ergodic rate for the mMTC in the mmWave system. Since the ergodic rate maximization-based HBF design problem is nonconvex, we leverage the alternating optimization strategy and propose a semidefinite relaxation-based HBF algorithm to improve the ergodic rate. Simulation results verify the superior performance of the proposed HBF design algorithm in improving the ergodic rate.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116122840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Service Provider Hardware Over-commitment in Container Orchestration Environments","authors":"Pedro Horchulhack, E. Viegas, A. Santin","doi":"10.1109/GLOBECOM48099.2022.10001375","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10001375","url":null,"abstract":"The deployment of container-based services continues to increase as time passes, mainly due to its fast provision time and lower allocation overheads. Yet, the literature still neglects the performance degradation in containers due to multi-tenancy and service provider hardware over-commitment. This paper proposes a new hardware over-commitment detection for container orchestration environments, implemented twofold. First, the containerized hardware usage of deployed containers is continuously monitored in a non-intrusive manner, leveraging the container engine resource management interface. Second, collected features are used by a recurrent neural network model for detecting both container and service level hardware over-commitment, following a time-series rationale. Experiments run on a containerized Apache Spark distribution have shown that multi-tenancy and hardware over-commitment significantly affect its performance. In addition, our proposed model is able to detect hardware over-commitment with up to 91% of true-positive at the container level, and up to 93% true-positive at the service level.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116243647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FTLIoT: A Federated Transfer Learning Framework for Securing IoT","authors":"Yazan Otoum, Sai Krishna Yadlapalli, A. Nayak","doi":"10.1109/GLOBECOM48099.2022.10001461","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10001461","url":null,"abstract":"The growing number of Internet of Things (IoT) applications and connected devices has increased the chance for more cyberattacks against those applications and devices and emphasized the need to protect the IoT networks. Due to the vast network and the anonymity of the internet, it has been challenging to preserve private information and communication. Although most systems implement security devices (i.e. firewalls) to avoid this, the second line of defence, Intrusion Detection Systems (IDSs), are critical in enhancing the system's security level. This paper proposed a model that combines the two machine learning techniques, Federated and Transfer Learning, to build an IDS to secure the IoT networks with less training time and enhanced performance while preserving the user's data privacy. Deep learning algorithms, namely Deep Neural Network (DNN) and Convolutional Neural Network (CNN), are used to evaluate the performance of the proposed framework on a benchmark dataset, CSE-CIC-IDS2018, and the feasibility of adopting Federated Transfer Learning (FTL) is shown in terms of performance metrics and training and fine-tuning time. The results show that the proposed technique can increase performance and decrease training time compared to the traditional machine learning techniques.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116442621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Machine Learning-based mmWave Path Loss Prediction for Urban/Suburban Macro Sites
Authors: Guillem Reus Muns, Jinfeng Du, D. Chizhik, R. Valenzuela, K. Chowdhury
DOI: https://doi.org/10.1109/GLOBECOM48099.2022.10000638
Published in: GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 4 December 2022
Abstract: Millimeter-wave (mmWave) bands have great potential to provide high data rates given their large available bandwidth, but severe path loss and high propagation sensitivity to environmental conditions make deployment planning particularly challenging. Traditional slope-intercept models fall short in capturing large site-specific variations due to urban clutter, terrain tilt, or foliage, and ray tracing struggles to characterize mmWave propagation accurately with reasonable complexity. In this work, we apply machine learning (ML) techniques to predict mmWave path loss on a link-by-link basis over an extensive set of 28 GHz field measurements collected in a major US city, comprising over 120,000 links from both urban and suburban scenarios with over 40 dB variation among links at similar distances. Either the raw environmental profile (terrain plus clutter) of each link or 8 selected expert features are used to directly predict path loss via regression-based approaches, or to predict the best-performing option out of a pool of theoretical/empirical propagation models. Our evaluation shows that Lasso regression provides the best path loss prediction, with performance (RMSE 8.1 dB) comparable to the per-site slope-intercept fit (RMSE 8.0 dB), whereas the model-selection method achieves an RMSE of 8.6 dB; both are significantly better than the best a posteriori 3GPP model (UMa-NLOS, 10.0 dB).

Title: Physical Layer Security Enabled Two-Stage AP Selection for Computation Offloading
Authors: Yu Yan, Tao Jing, Qinghe Gao, Yingzhen Wu, Xiaoxuan Wang
DOI: https://doi.org/10.1109/GLOBECOM48099.2022.10000975
Published in: GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 4 December 2022
Abstract: Physical layer security (PLS) has been widely employed in studies of computation offloading under traditional centralized networks. In contrast to existing studies that combat only passive eavesdroppers, we propose an efficient user-centric secure two-stage (UCSTS) access point (AP) selection method that combats active and passive eavesdropping simultaneously by exploiting the distributed nature of cell-free massive multiple-input multiple-output (MIMO) scenarios. Furthermore, we propose a secure computation task offloading (SCTO) model that guarantees the security of both uplink and downlink transmission. Aiming to reduce total energy consumption while maintaining high security, a minimum-energy-consumption optimization problem is solved with an alternating optimization (AO) algorithm. Simulation results show that the proposed model effectively counters the eavesdropper while reducing total energy consumption, and that the proposed selection method offers better security than the AN-based scheme.
{"title":"Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing","authors":"Anqi Gu, Huaming Wu, Huijun Tang, Chaogang Tang","doi":"10.1109/GLOBECOM48099.2022.10001474","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10001474","url":null,"abstract":"The rapid development of Vehicular Edge Computing (VEC) provides great support for Collaborative Vehicle Infrastructure System (CVIS) and promotes the safety of autonomous driving. In CVIS, crowd-sensing data will be uploaded to the VEC server to fuse the data and generate tasks. However, when there are too many vehicles, it brings huge challenges for VEC to make proper decisions according to the information from vehicles and roadside infrastructure. In this paper, a reverse offloading framework is constructed, which comprehensively considers the relationship balance between task completion delay and the energy consumption of User Vehicle (UV). Furthermore, in order to minimize the overall system consumption, we establish an adaptive optimal reverse offloading strategy based on Deep Q-Network (DQN). Simulation results demonstrate that the proposed algorithm can effectively reduce the energy consumption and task delay, when compared with the full local and fixed offloading schemes.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"12 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122689218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement Learning assisted Routing for Time Sensitive Networks","authors":"Nurefsan Sertbas Bülbül, Mathias Fischer","doi":"10.1109/GLOBECOM48099.2022.10001630","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10001630","url":null,"abstract":"Recent developments in real-time critical systems pave the way for different application scenarios such as Industrial IoT with various quality-of-service (QoS) requirements. The most critical common feature of such applications is that they are sensitive to latency and jitter. Thus, it is desired to perform flow placements strategically considering application requirements due to limited resource availability. In this paper, path computation for time-sensitive networks is investigated while satisfying individual end-to-end delay requirements of critical traffic. The problem is formulated as a mixed-integer linear program (MILP) which is NP-hard with exponentially increasing computational complexity as the network size expands. To solve the MILP with high efficiency, we propose a reinforcement learning (RL) algorithm that learns the best routing policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and captures different execution times of the actions. The reward function in the proposed algorithm is carefully designed for meeting individual flow deadlines. Simulation results indicate that the proposed reinforcement learning algorithm can produce near-optimal flow allocations (close by ~1.5 %) and scales well even with large topology sizes.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121879147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framed Projective Cone Scheduling: Latency vs. Context-Switching Tradeoff in Data Centers","authors":"Emi U. Zeger, Ariana J. Mann, N. Bambos","doi":"10.1109/GLOBECOM48099.2022.10000687","DOIUrl":"https://doi.org/10.1109/GLOBECOM48099.2022.10000687","url":null,"abstract":"Queue-processor service and communication switches are often reconfigured in data centers to dynamically reallocate resources based on demand and maximize utilization. However, reconfiguration introduces overhead that will reduce the usable processor bandwidth if done too frequently. We introduce a cost framework to determine how frequently such reconfigurations should occur in order to optimally trade off between the cost of the reconfiguration (or context switching) overhead and the latency cost due to the delayed reconfiguration. A general framing algorithm is introduced to optimize dynamic processor allocation that limits processors to only be reallocated at the beginning of a new frame, but allows a class of functions of the historical backlog to be employed when selecting the new allocation. We show that the system throughput is not affected by framing, however, the job latency increases with the frame's span. The cost model and framed allocation algorithm are investigated to determine how to balance a tolerable increase in job latency for significant reduction of system overhead due to processor reconfiguration.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116877384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}