{"title":"HT-FL: Hybrid Training Federated Learning for Heterogeneous Edge-Based IoT Networks","authors":"Yixun Gu;Jie Wang;Shengjie Zhao","doi":"10.1109/TMC.2024.3502686","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502686","url":null,"abstract":"With the continuous rolling-out of edge computing, Federated Learning (FL) has become a promising solution for the intelligent Internet-of-Things (IoT). In addition to resource constraints, deploying FL schemes in IoT networks is greatly challenged by <i>heterogeneity</i> in multiple dimensions. While heterogeneity in data distribution and computation capability has been extensively studied, the impact of distinct, even hybrid, training paradigms on FL performance remains largely unknown. To answer this open question in the IoT context, we propose a <i>Hybrid-Training Federated Learning</i> (HT-FL) algorithm for power-constrained IoT networks, incorporating both sequential and parallel training that naturally adapts to various sub-network topologies, while greatly reducing energy consumption during the training stage. We demonstrate through analysis that the convergence of HT-FL is theoretically guaranteed, achieving <inline-formula><tex-math>$O(\\frac{1}{\\sqrt{K}})$</tex-math></inline-formula> for carefully chosen learning rates. Experiments on multiple datasets show that the proposed HT-FL outperforms existing FL schemes on multiple training tasks under various data distribution settings, while reducing energy consumption by an average of 20%.
In a more practical sense, a self-adaptive parameter-tuning strategy is also designed for HT-FL deployment, which can be easily extended to other multi-layer FL schemes in complex application scenarios.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2817-2831"},"PeriodicalIF":7.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Objective Aerial Collaborative Secure Communication Optimization via Generative Diffusion Model-Enabled Deep Reinforcement Learning","authors":"Chuang Zhang;Geng Sun;Jiahui Li;Qingqing Wu;Jiacheng Wang;Dusit Niyato;Yuanwei Liu","doi":"10.1109/TMC.2024.3502685","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502685","url":null,"abstract":"Due to their flexibility and low cost, unmanned aerial vehicles (UAVs) are increasingly crucial for enhancing the coverage and functionality of wireless networks. However, incorporating UAVs into next-generation wireless communication systems poses significant challenges, particularly in sustaining high-rate and long-range secure communications against eavesdropping attacks. In this work, we consider a UAV swarm-enabled secure surveillance network system, where a UAV swarm forms a virtual antenna array to transmit sensitive surveillance data to a remote base station (RBS) via collaborative beamforming (CB) so as to resist mobile eavesdroppers. Specifically, we formulate an aerial secure communication and energy efficiency multi-objective optimization problem (ASCEE-MOP) to maximize the secrecy rate of the system and to minimize the flight energy consumption of the UAV swarm. To address the non-convex, NP-hard and dynamic ASCEE-MOP, we propose a generative diffusion model-enabled twin delayed deep deterministic policy gradient (GDMTD3) method. Specifically, GDMTD3 leverages an innovative application of diffusion models to determine optimal excitation current weights and position decisions of UAVs. The diffusion models can better capture the complex dynamics and the trade-off of the ASCEE-MOP, thereby yielding promising solutions. Simulation results highlight the superior performance of the proposed approach compared with traditional deployment strategies and some other deep reinforcement learning (DRL) benchmarks.
Moreover, performance analysis under various parameter settings of GDMTD3 and different numbers of UAVs verifies the robustness of the proposed approach.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3041-3058"},"PeriodicalIF":7.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TapWristband: A Wearable Keypad System Based on Wrist Vibration Sensing","authors":"Jialiang Yan;Siyao Cheng;Yang Zhao;Jie Liu","doi":"10.1109/TMC.2024.3503417","DOIUrl":"https://doi.org/10.1109/TMC.2024.3503417","url":null,"abstract":"Fine-grained human motion detection has become increasingly important with the growing popularity of human-computer interaction (HCI). However, traditional gesture-based HCI systems often require the design of new operation modes rather than conforming to user habits, thus increasing system learning costs. In this paper, we present TapWristband, a novel wearable sensor-based vibration sensing system that detects finger tapping by measuring wrist vibrations. We first perform real-world experiments to collect measurements for modeling the effects of the tapping motion on wearable wristband sensors, including a piezoelectric transducer (PZT) and an inertial measurement unit (IMU). We find that a damped vibration model can be used to represent the relaxing phase of a vibration response due to tapping motion. Thus, we propose a mutual cross-correlation-based event segmentation algorithm to extract the vibration signal during the relaxing phase. After that, we develop feature extraction and classification algorithms to recognize the tapping patterns of five fingers across twelve key locations of a keypad system. Finally, we perform extensive experiments with thirteen participants to evaluate our system.
Experimental results show that our low-cost vibration sensing system can achieve an average accuracy of over 93% with a tapping speed of over 100 taps per minute in real-world tapping scenarios.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2949-2966"},"PeriodicalIF":7.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TaPIN: Reinforcing PIN Authentication on Smartphones With Tap Biometrics","authors":"Junhyub Lee;Insu Kim;Sangeun Oh;Hyosu Kim","doi":"10.1109/TMC.2024.3502902","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502902","url":null,"abstract":"PIN authentication is the first line of defense for protecting private data on many smartphone applications, such as lock screens, messengers, and banking apps. However, existing PIN authentication systems have several constraints regarding security, usability, and robustness. To go beyond their limitations, this paper presents TaPIN, a reliable system that authenticates smartphone users with the collaborative use of PINs and tap biometrics. A user is first instructed to enter her PIN by tapping a smartphone screen for authentication. During the PIN entry, the user's fingertip collides with the screen, producing user-specific vibration and sound signals. TaPIN then senses the tap-induced signals and the collision properties, e.g., pressures and sizes, using the smartphone's built-in sensors and leverages them as biometric features. That is, it authenticates the user by verifying not only the entered PIN but also the collected features. Our experiments with 20 real-world users demonstrate that this two-factor authentication system is easy to use, more secure than existing methods, and deployable without dedicated hardware. 
For example, it accurately authenticates users with an average EER of 1.9% in stationary environments and maintains a reasonable level of security regardless of devices, tap styles, and noise.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2519-2533"},"PeriodicalIF":7.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Coverage-Aware High-Quality Sensing Data Collection Method in Mobile Crowd Sensing","authors":"Ye Wang;Hui Gao;Edith C. H. Ngai;Kun Niu;Tan Yang;Bo Zhang;Wendong Wang","doi":"10.1109/TMC.2024.3502158","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502158","url":null,"abstract":"In this paper, we leverage unmanned aerial vehicles (UAVs) to enhance mobile crowd sensing (MCS) by addressing two critical challenges: uncontrollable data quality and inevitable unsensed points of interest (PoIs). We introduce a UAV-assisted method to deal with these challenges. To ensure the accuracy of sensing data contributed by human participants, the proposed truth discovery method utilizes UAV-collected sensing data as few-shot samples to train the truth discovery model, which is then employed to calibrate sensing data solely collected by human participants. Additionally, to meet the sensing coverage requirement, we present a method that predicts data values for unsensed PoIs by utilizing their historical sensing data and information from the sensed neighboring PoIs. The method employs a graph neural network to capture spatio-temporal relationships of the sensing data, facilitating accurate estimation of unsensed PoIs.
Through extensive simulations, our approaches demonstrate superior performance compared to existing methods, showcasing the potential of UAV-assisted MCS for overcoming challenges and enhancing data collection efficiency in various domains.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3025-3040"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Optimize Human-in-the-Loop Learning in Repeated Routing Games","authors":"Hongbo Li;Lingjie Duan","doi":"10.1109/TMC.2024.3502076","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502076","url":null,"abstract":"Today, navigation applications (e.g., Waze and Google Maps) enable human users to learn and share the latest traffic observations, yet such information sharing simply helps selfish users predict and choose the shortest paths to jam each other. Prior routing game studies focus on myopic users in oversimplified one-shot scenarios to regulate selfish routing via information hiding or pricing mechanisms. For practical human-in-the-loop learning (HILL) in repeated routing games, we face non-myopic users with differential past observations and need new mechanisms (preferably non-monetary) to persuade users to adhere to the optimal path recommendations. We model the repeated routing game in a typical parallel transportation network, which generally contains one deterministic path and <inline-formula><tex-math>$N$</tex-math></inline-formula> stochastic paths. We first prove that, under either the information-sharing mechanism in use or the hiding mechanism from the latest routing literature, the resultant price of anarchy (PoA), which measures the efficiency loss from the social optimum, can approach infinity, indicating an arbitrarily poor exploration-exploitation tradeoff over time. Then we propose a novel user-differential probabilistic recommendation (UPR) mechanism to differentiate and randomize path recommendations for users with differential learning histories. We prove that our UPR mechanism ensures interim individual rationality for all users and significantly reduces <inline-formula><tex-math>$\\text{PoA}=\\infty$</tex-math></inline-formula> to close-to-optimal <inline-formula><tex-math>$\\text{PoA}=1+\\frac{1}{4N+3}$</tex-math></inline-formula>, which cannot be further reduced by any other non-monetary mechanism.
In addition to theoretical analysis, we conduct extensive experiments using real-world datasets to generalize our routing graphs and validate the close-to-optimal performance of the UPR mechanism.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2889-2899"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latency-Energy Efficient Task Offloading in the Satellite Network-Assisted Edge Computing via Deep Reinforcement Learning","authors":"Jian Zhou;Juewen Liang;Lu Zhao;Shaohua Wan;Hui Cai;Fu Xiao","doi":"10.1109/TMC.2024.3502643","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502643","url":null,"abstract":"As the demand for global computing coverage continues to surge, satellite edge computing emerges as a pivotal technology for the next generation of networks. Unlike ground-based edge servers, Low Earth Orbit (LEO) satellites face distinctive challenges, including high-speed mobility and resource limitations. Therefore, effectively utilizing LEO satellites for global coverage services is crucial but challenging due to their dynamic coverage areas and diverse task requirements. To address these challenges, we introduce a novel dual-cloud edge collaborative task offloading architecture in the satellite network-assisted edge computing environment, namely, <underline>S</underline>atellite-<underline>G</underline>round <underline>T</underline>ask <underline>O</underline>ffloading (<italic>SGTO</italic>). The architecture employs a Geostationary Earth Orbit (GEO) satellite and a ground cloud computing center as the satellite cloud and ground cloud, respectively, and LEO satellites as edge nodes. We formally define the task offloading problem in the <italic>SGTO</italic> with the aim of minimizing the average latency and average energy consumption. We then propose an adaptive approach named <italic>SGTO-A</italic> from the perspective of satellites to adaptively solve the problem leveraging deep reinforcement learning. Specifically, we transform the task offloading problem into a Markov decision process and adopt the generalized proximal policy optimization (<italic>GePPO</italic>) algorithm to solve the problem.
Finally, experimental results demonstrate that the <italic>SGTO</italic> architecture and <italic>SGTO-A</italic> outperform representative approaches in terms of average latency, average energy consumption and running time.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2644-2659"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed Learn-to-Optimize: Limited Communications Optimization Over Networks via Deep Unfolded Distributed ADMM","authors":"Yoav Noah;Nir Shlezinger","doi":"10.1109/TMC.2024.3502574","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502574","url":null,"abstract":"Distributed optimization is a fundamental framework for collaborative inference over networks. The operation is modeled as the joint minimization of a shared objective which typically depends on local observations. Distributed optimization algorithms, such as the distributed alternating direction method of multipliers (D-ADMM), iteratively combine local computations and message exchanges. A main challenge associated with distributed optimization, and particularly with D-ADMM, is that it requires a large number of communications to reach consensus. In this work we propose <italic>unfolded D-ADMM</italic>, which follows the emerging deep unfolding methodology to enable D-ADMM to operate reliably with a predefined and small number of messages exchanged by each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while leveraging data to tune the hyperparameters of each iteration. These hyperparameters can either be agent-specific, aiming at achieving the best performance within a fixed number of iterations over a given network, or shared among the agents, allowing them to learn to optimize over different networks. We specialize unfolded D-ADMM for two representative settings: a distributed sparse recovery setup and a distributed machine learning scenario.
Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications utilized by D-ADMM, without compromising its performance.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3012-3024"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Task Assignment in Spatial Crowdsourcing: A Human-in-The-Loop Approach","authors":"Qingshun Wu;Yafei Li;Jinxing Yan;Mei Zhang;Jianliang Xu;Mingliang Xu","doi":"10.1109/TMC.2024.3501734","DOIUrl":"https://doi.org/10.1109/TMC.2024.3501734","url":null,"abstract":"In recent years, adaptive task assignment has been explored in spatial crowdsourcing. The challenge lies in how to adaptively partition the task stream to achieve the best utility for task assignment. A number of existing works have attempted to solve this challenge and achieve better performance by utilizing learning-based methods. Specifically, they mainly employ reinforcement learning to divide the task stream into a series of suitable batches and then perform task assignment in a batch fashion. Drawing inspiration from the effectiveness of human-machine collaborative decision-making, we aim to investigate human-in-the-loop methods to further enhance the performance of adaptive task assignment. In this paper, we propose a novel framework called Human-in-the-Loop Adaptive Partition (HLAP), which consists of two primary modules: Reinforcement Learning Partition Decision (RL-PD) and Human Supervision and Guidance (HSG). In the RL-PD module, we develop an RL agent, referred to as the decision-maker, by integrating the dual attention network into the Deep Q-Network (DQN) algorithm to capture cross-dimensional contextual information and long-range dependencies for a better understanding of the environment. In the HSG module, we design a human-in-the-loop mechanism to optimize the performance of the decision-maker, focusing on addressing two key issues: when and how humans interact with the decision-maker. Furthermore, to alleviate the heavy workload on humans, we construct a supervisor based on RL to oversee the decision-maker's partition process and adaptively determine when human intervention is necessary. 
We conduct extensive experiments on two real-world datasets, and the results demonstrate the efficiency and effectiveness of the HLAP framework.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2726-2739"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A High-Reliability Small-Area Task Offloading Mechanism With Trust Evaluation and Fuzzy Logic in Power IoTs","authors":"Suhong Wang;Tuanfa Qin;Tingting Chen;Wenhao Guo;Yongle Hu;Hongmin Sun","doi":"10.1109/TMC.2024.3502167","DOIUrl":"https://doi.org/10.1109/TMC.2024.3502167","url":null,"abstract":"To solve the problem that high-priority tasks cannot be processed in a timely and reliable manner, due to multi-task disorder and dynamicity in Power Internet of Things (PIoTs), a high-reliability small-area task offloading mechanism with trust evaluation and fuzzy logic (HRSATF) is proposed. First, considering task priority, a preemptive priority queue is introduced to ensure that high-priority tasks are processed preferentially, and the minimum resource allocation coefficients (MRACs) of tasks are solved to ensure the effectiveness of offloading. Second, a trust model between the smart device (SD) and edge server (ES) is established, and ESs are divided into three priorities based on trust value and computing power by fast non-dominated sorting. Third, fuzzy logic is applied to select the target ES when the priorities of the task and ES do not match or the ES is offline, and the MRAC is used to schedule tasks between the SD and ES.
Finally, NSGA2 is modified (yielding MNSGA2) to verify the effectiveness of HRSATF in terms of success rate, time, power consumption and load balancing, where the success rate is increased by <inline-formula><tex-math>$102.3\\%$</tex-math></inline-formula>, and time and power consumption are decreased by at most <inline-formula><tex-math>$90.7\\%$</tex-math></inline-formula> and <inline-formula><tex-math>$89.3\\%$</tex-math></inline-formula>, respectively.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2935-2948"},"PeriodicalIF":7.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}