{"title":"Accurate Prediction of Multi-Dimensional Required Resources in 5G via Federated Deep Reinforcement Learning","authors":"Haojun Huang;Qifan Wang;Weimin Wu;Miao Wang;Geyong Min","doi":"10.1109/TMC.2024.3480136","DOIUrl":"https://doi.org/10.1109/TMC.2024.3480136","url":null,"abstract":"The accurate prediction of required resources in terms of storage, computing, and bandwidth is essential for 5G to host diverse services. Existing efforts illustrate that predicting the unknown required resources with a third-order tensor is more promising than 2D-matrix-based solutions. However, most of them fail to leverage the inherent features hidden in network traffic, such as temporal stability and service correlation, to build a third-order tensor for multi-dimensional required resource prediction in an intelligent manner, resulting in coarse-grained prediction accuracy. Furthermore, it is difficult to build a third-order tensor from rate-varied measurements in 5G due to the different lengths of measurement time slots. To address these issues, we propose an Accurate Prediction of Multi-Dimensional Required Resources (APMR) approach in 5G via Federated Deep Reinforcement Learning (FDRL). We first confirm that the resource requests originating from different Base Stations (BSs) at varied measurement rates have similar features in the service and time domains but cannot directly form a series of regular tensors. Building on these observations, we reshape the measurement data into a series of standard third-order tensors of the same size, which include many elements obtained from measurements and some unknown elements that need to be inferred. To obtain accurate predictions, an FDRL-based tensor factorization approach is introduced to intelligently utilize multiple specific iteration rules for local model learning, and accuracy-aware and latency-based depreciation strategies are exploited to aggregate local models for resource prediction. 
Extensive simulation experiments demonstrate that APMR can predict the multi-dimensional required resources more accurately than the state-of-the-art approaches.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1469-1481"},"PeriodicalIF":7.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory Optimization and Pick-Up and Delivery Sequence Design for Cellular-Connected Cargo AAVs","authors":"Jiangling Cao;Liang Yang;Dingcheng Yang;Tiankui Zhang;Lin Xiao;Hongbo Jiang;Dusit Niyato","doi":"10.1109/TMC.2024.3480910","DOIUrl":"https://doi.org/10.1109/TMC.2024.3480910","url":null,"abstract":"In this paper, we consider a cargo autonomous aerial vehicle (AAV)-aided multi-parcel pick-up and delivery network, where the communication capability of the AAV is provided by ground base stations (GBSs). For this system setup, our goal is to optimize the trajectory of the cargo AAV while minimizing the combined impact of total energy consumption and total outage time. Simultaneously, we aim to maximize overall user satisfaction throughout the entire flight duration. More specifically, we propose a pick-up and delivery of AAV (PDU) framework to address this problem; the framework consists of two parts. First, a simulated annealing (SA) algorithm is used to obtain the pick-up and delivery (P&D) order of parcels. Based on the P&D order obtained through SA, we then use deep reinforcement learning (DRL) to optimize the flight trajectory of the AAV to ensure the expected communication quality between the AAV and the GBSs. To verify the effectiveness of the proposed algorithms, we design three baseline strategies for comparison and also investigate the effect of using the PDU framework with different weights. 
Finally, numerical results show that the PDU strategy improves performance by about 5%-30% over the other strategies in balancing the tradeoff among AAV energy consumption, communication quality, and user satisfaction.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1402-1416"},"PeriodicalIF":7.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FedSN: A Federated Learning Framework Over Heterogeneous LEO Satellite Networks","authors":"Zheng Lin;Zhe Chen;Zihan Fang;Xianhao Chen;Xiong Wang;Yue Gao","doi":"10.1109/TMC.2024.3481275","DOIUrl":"https://doi.org/10.1109/TMC.2024.3481275","url":null,"abstract":"Recently, a large number of Low Earth Orbit (LEO) satellites have been launched and deployed successfully in space. Equipped with multimodal sensors, LEO satellites serve not only communications but also various machine learning applications. However, a ground station (GS) may be incapable of downloading such a large volume of raw sensing data for centralized model training due to the limited contact time with LEO satellites (e.g., 5 minutes). Therefore, <italic>federated learning</i> (FL) has emerged as a promising solution to this problem via on-device training. Unfortunately, enabling FL on LEO satellites still faces three critical challenges: i) heterogeneous computing and memory capabilities, ii) limited downlink/uplink rates, and iii) model staleness. To this end, we propose <bold>FedSN</b> as a general FL framework to tackle these challenges. Specifically, we first present a novel sub-structure scheme to enable heterogeneous local model training that accounts for the different computing, memory, and communication constraints on LEO satellites. Additionally, we propose a pseudo-synchronous model aggregation strategy to dynamically schedule model aggregation and compensate for model staleness. 
Extensive experiments with real-world satellite data demonstrate that the FedSN framework achieves higher accuracy and lower computing and communication overhead than the state-of-the-art benchmarks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1293-1307"},"PeriodicalIF":7.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CV-Cast: Computer Vision–Oriented Linear Coding and Transmission","authors":"Jakub Žádník;Michel Kieffer;Anthony Trioux;Markku Mäkitalo;Pekka Jääskeläinen","doi":"10.1109/TMC.2024.3478048","DOIUrl":"https://doi.org/10.1109/TMC.2024.3478048","url":null,"abstract":"Remote inference allows lightweight edge devices, such as autonomous drones, to perform vision tasks exceeding their computational, energy, or processing delay budget. In such applications, reliable transmission of information is challenging due to high variations of channel quality. Traditional approaches involving spatio-temporal transforms, quantization, and entropy coding followed by digital transmission may be affected by a sudden decrease in quality (the <italic>digital cliff</i>) when the channel quality is lower than expected at design time. This problem can be addressed by Linear Coding and Transmission (LCT), a joint source and channel coding scheme relying on linear operators only, which allows the reconstructed per-pixel error to be commensurate with the wireless channel quality. In this paper, we propose CV-Cast: the first LCT scheme optimized for computer vision task accuracy instead of per-pixel distortion. With this approach, at a channel signal-to-noise ratio of 10 dB, for instance, CV-Cast requires transmitting 28% fewer symbols than a baseline LCT scheme in semantic segmentation and 15% fewer in object detection tasks. 
Simulations involving a realistic 5G channel model confirm the smooth decrease in accuracy achieved with CV-Cast, while images encoded by JPEG or learned image coding (LIC) and transmitted using classical schemes at low Eb/N0 suffer from the digital cliff.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 2","pages":"1149-1162"},"PeriodicalIF":7.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10719663","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proactive Obsolete Packet Management Based Analysis of Age of Information for LCFS Heterogeneous Queueing System","authors":"Y. Arun Kumar Reddy;T. G Venkatesh","doi":"10.1109/TMC.2024.3481062","DOIUrl":"https://doi.org/10.1109/TMC.2024.3481062","url":null,"abstract":"This paper analyzes the Age of Information (AoI), focusing on the transmission of status updates from a source to a destination. We analyze the AoI in a system comprised of two heterogeneous servers with exponential distribution parameters <inline-formula><tex-math>$\\mu _{1}$</tex-math></inline-formula> and <inline-formula><tex-math>$\\mu _{2}$</tex-math></inline-formula>, respectively. Our study adopts the stochastic hybrid systems (SHS) methodology to thoroughly assess the system’s performance. We explore various queueing disciplines, including work-conserving Last-Come-First-Serve (LCFS) and LCFS with probabilistic routing, to accurately quantify the AoI and Peak AoI (PAoI) metrics. We use the Proactive Obsolete Packet Management (POPMAN) approach to identify and discard obsolete packets proactively, thus enhancing server processing efficiency and ensuring orderly packet reception. We also investigate parameters such as the probability of packet preemption, the probability of packets becoming obsolete, the probability of informative packets, and the optimal splitting probabilities. Results show an improvement in both AoI and PAoI within the work-conserving LCFS queueing system with the integration of the POPMAN method. Furthermore, LCFS with probabilistic routing using the POPMAN approach performs similarly to conventional methods. In all the queueing systems studied, as the arrival rate <inline-formula><tex-math>$\\lambda \\to \\infty$</tex-math></inline-formula>, the average AoI and PAoI approach <inline-formula><tex-math>$1/(\\mu _{1}+\\mu _{2})$</tex-math></inline-formula>. 
For <inline-formula><tex-math>$c$</tex-math></inline-formula> servers, they approach <inline-formula><tex-math>$1/(\\mu _{1}+\\mu _{2}+\\cdots +\\mu _{c})$</tex-math></inline-formula>.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1513-1529"},"PeriodicalIF":7.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Reinforcement Learning-Based Joint Caching and Routing in AI-Driven Networks","authors":"Meiyi Yang;Deyun Gao;Weiting Zhang;Dong Yang;Dusit Niyato;Hongke Zhang;Victor C. M. Leung","doi":"10.1109/TMC.2024.3481276","DOIUrl":"https://doi.org/10.1109/TMC.2024.3481276","url":null,"abstract":"To reduce redundant traffic transmission in both wired and wireless networks, we study the optimal content placement problem, which naturally arises in many applications. In this paper, considering the limited cache capacity, unknown popularity distribution, and non-stationary user demands, we address this problem by jointly optimizing content caching and routing with the objective of minimizing transmission cost. By optimizing the routing with the <italic>route-to-least-cost-cache</i> policy, the content caching process is modeled as a Markov decision process (MDP) aiming to maximize the caching reward. However, the optimization problem involves multiple nodes selecting caching contents, which leads to a combinatorial increase in the number of possible actions with the number of action dimensions. To handle this curse of dimensionality, we propose an intelligent caching algorithm that embeds an action branching architecture into a dueling double deep Q-network (D3QN) to optimize caching decisions, so that the agent at the controller can adaptively learn and track the underlying dynamics. Considering the independence of each branch, a marginal gain-based replacement rule is proposed to satisfy the cache capacity constraint. 
Our simulation results show that, compared with the prior art, the caching reward and hit rate of the proposed algorithm are increased by 35.3% and 33.6%, respectively, on average.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1322-1337"},"PeriodicalIF":7.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computation Offloading and Resource Allocation in LEO Satellite-Terrestrial Integrated Networks With System State Delay","authors":"Bo Xie;Haixia Cui;Ivan Wang-Hei Ho;Yejun He;Mohsen Guizani","doi":"10.1109/TMC.2024.3479243","DOIUrl":"https://doi.org/10.1109/TMC.2024.3479243","url":null,"abstract":"Computation offloading optimization for energy saving is becoming increasingly important in low-Earth orbit (LEO) satellite-terrestrial integrated networks (STINs), since battery techniques have not kept up with the demands of ground terminal devices. In this paper, we design a delay-based deep reinforcement learning (DRL) framework specifically for computation offloading decisions, which can effectively reduce energy consumption. Additionally, we develop a multi-level feedback queue for resource allocation (RAMLFQ), which can effectively enhance the CPU’s efficiency in task scheduling. We first formulate the computation offloading problem with system delay as Delay Markov Decision Processes (DMDPs) and then transform them into equivalent standard Markov Decision Processes (MDPs). To solve the optimization problem effectively, we employ a double deep Q-network (DDQN) method, enhancing it with an augmented state space to better handle the unique challenges posed by system delays. 
Simulation results demonstrate that the proposed learning-based computation offloading algorithm achieves high performance efficiency and attains a lower total cost than other existing offloading methods.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1372-1385"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ui-Ear: On-Face Gesture Recognition Through On-Ear Vibration Sensing","authors":"Guangrong Zhao;Yiran Shen;Feng Li;Lei Liu;Lizhen Cui;Hongkai Wen","doi":"10.1109/TMC.2024.3480216","DOIUrl":"https://doi.org/10.1109/TMC.2024.3480216","url":null,"abstract":"With their convenient design and rich functionality, wireless earbuds are rapidly penetrating our daily lives and taking the place of traditional wired earphones. The sensing capabilities of wireless earbuds have attracted great interest from researchers in exploring them as a new interface for human-computer interaction. However, due to their extremely compact size, interaction on the body of the earbuds is limited and inconvenient. In this paper, we propose <italic>Ui-Ear</i>, a new on-face gesture recognition system that enriches interaction maneuvers for wireless earbuds. <italic>Ui-Ear</i> exploits the sensing capability of Inertial Measurement Units (IMUs) to extend interaction to the skin of the face near the ears. The accelerometer and gyroscope in the IMUs perceive the dynamic vibration signals induced by on-face touching and moving, which brings rich maneuverability. Since IMUs are provided on most budget and high-end wireless earbuds, we believe that <italic>Ui-Ear</i> has great potential to be adopted pervasively. To demonstrate the feasibility of the system, we define seven different on-face gestures and design an end-to-end learning approach based on Convolutional Neural Networks (CNNs) to classify them. To further improve the generalization capability of the system, an adversarial learning mechanism is incorporated into the offline training process to suppress user-specific features while enhancing gesture-related features. We recruit 20 participants and collect a real-world dataset in a common office environment to evaluate the recognition accuracy. 
The extensive evaluations show that the average recognition accuracy of <italic>Ui-Ear</i> is over 95% and 82.3% in the user-dependent and user-independent tasks, respectively. Moreover, we show that the pre-trained model (learned from the user-independent task) can be fine-tuned with only a few training samples from the target user to achieve relatively high recognition accuracy (up to 95%). Finally, we implement the personalization and recognition components of <italic>Ui-Ear</i> on an off-the-shelf Android smartphone to evaluate its system overhead. The results demonstrate that <italic>Ui-Ear</i> achieves real-time response while bringing only trivial energy consumption on smartphones.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1482-1495"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Physical Layer Cross-Technology Communication via Explainable Neural Networks","authors":"Haoyu Wang;Jiazhao Wang;Wenchao Jiang;Shuai Wang;Demin Gao","doi":"10.1109/TMC.2024.3480109","DOIUrl":"https://doi.org/10.1109/TMC.2024.3480109","url":null,"abstract":"Cross-technology communication (CTC) facilitates seamless interaction between different wireless technologies. Most existing methods use reverse engineering to derive the required transmission payload, generating a waveform that the target device can successfully demodulate. However, traditional approaches have certain limitations, including reliance on specific reverse engineering algorithms or the need for manual parameter tuning to reduce emulation distortion. In this work, we present NNCTC, a framework for achieving physical layer cross-technology communication through explainable neural networks, incorporating relevant knowledge from the wireless communication physical layer into the neural network models. We first convert the various signal processing components within the CTC process into neural network models, then build a training framework for the CTC encoder-decoder structure to achieve CTC. NNCTC significantly reduces the complexity of CTC by automatically deriving CTC payloads through training. We demonstrate how NNCTC implements CTC in WiFi systems using OFDM and CCK modulation. On WiFi systems using OFDM modulation, NNCTC outperforms the WEBee and WIDE designs in terms of error performance, achieving an average packet reception ratio (PRR) of 92.3% and an average symbol error rate (SER) as low as 1.3%. 
The highest PRR can reach up to 99%.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1550-1566"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical Sensing-Based Intelligent Toothbrushing Monitoring System","authors":"Kaixin Chen;Lei Wang;Yongzhi Huang;Kaishun Wu;Lu Wang","doi":"10.1109/TMC.2024.3479455","DOIUrl":"https://doi.org/10.1109/TMC.2024.3479455","url":null,"abstract":"Incorrect brushing methods normally lead to poor oral hygiene and result in severe oral diseases and complications. While effective brushing can address this issue, individuals often struggle with incorrect brushing habits, such as aggressive brushing, insufficient brushing, and missed brushing. To break this stalemate, in this paper we propose LiT, a toothbrushing monitoring system that assesses the brushing status on 16 surfaces using the Bass technique. LiT utilizes the blue LEDs of commercial LED toothbrushes as transmitters and incorporates only two low-cost photodetectors as receivers on the toothbrush head. It is challenging to determine the optimal deployment positions and minimize the number of photodetectors needed to establish the light transmission channel in the oral cavity. To address these challenges, we establish mathematical models of the oral cavity based on the deployment of the two photodetectors to theoretically validate the feasibility and prove the robustness of the design. Furthermore, we design a comprehensive framework to address implementation challenges, including brushing action separation, light interference on the outer surfaces of the front teeth, toothpaste diversity, user variations, brushing hand variability, and incorrect brushing. Experimental results demonstrate that LiT achieves a highly accurate surface recognition rate of 95.3%, a brushing-duration estimation error of 6.1%, and an incorrect brushing detection accuracy of 96.9%. 
Furthermore, LiT retains stable performance under a variety of circumstances, such as varying lighting conditions, user movement, toothpaste diversity, and left- and right-handed users.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1417-1436"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}