LegoDNN: block-grained scaling of deep neural networks for mobile vision
Rui Han, Qinglong Zhang, C. Liu, Guoren Wang, Jian Tang, L. Chen
DOI: https://doi.org/10.1145/3447993.3483249
Abstract: Deep neural networks (DNNs) have become ubiquitous in mobile and embedded systems for applications such as image/object recognition and classification. The trend of executing multiple DNNs simultaneously exacerbates the existing difficulty of meeting stringent latency/accuracy requirements on resource-constrained mobile devices. Prior art explores the accuracy-resource tradeoff by scaling model sizes in accordance with resource dynamics. However, such model scaling approaches face two imminent challenges: (i) a large exploration space of model sizes, and (ii) prohibitively long training time for different model combinations. In this paper, we present LegoDNN, a lightweight, block-grained scaling solution for running multi-DNN workloads in mobile vision systems. LegoDNN guarantees short model training times by extracting and training only a small number of common blocks (e.g., 5 in VGG and 8 in ResNet) in a DNN. At run time, LegoDNN optimally combines the descendant models of these blocks to maximize accuracy under specific resource and latency constraints, while reducing switching overhead via smart block-level scaling of the DNN. We implement LegoDNN in TensorFlow Lite and extensively evaluate it against state-of-the-art techniques (FLOP scaling, knowledge distillation, and model compression) using a set of 12 popular DNN models. Evaluation results show that LegoDNN provides 1,296x to 279,936x more options in model sizes without increasing training time, thus achieving up to 31.74% improvement in inference accuracy and 71.07% reduction in scaling energy consumption.
Flexible high-resolution object detection on edge devices with tunable latency
Shiqi Jiang, Zhiqi Lin, Yuanchun Li, Yuanchao Shu, Yunxin Liu
DOI: https://doi.org/10.1145/3447993.3483274
Abstract: Object detection is a fundamental building block of video analytics applications. While neural network (NN)-based object detection models have shown excellent accuracy on benchmark datasets, they are not well positioned for high-resolution image inference on resource-constrained edge devices. Common approaches, including down-sampling inputs and scaling up neural networks, fall short of adapting to video content changes and varying latency requirements. This paper presents Remix, a flexible framework for high-resolution object detection on edge devices. Remix takes a latency budget as input and computes an image partition and model execution plan that runs off-the-shelf neural networks on non-uniformly partitioned image blocks. As a result, it maximizes the overall detection accuracy by allocating varying amounts of compute power to different areas of an image. We evaluate Remix on public datasets as well as real-world videos collected by ourselves. Experimental results show that Remix can either improve the detection accuracy by 18%-120% for a given latency budget, or achieve up to 8.1× inference speedup with accuracy on par with the state-of-the-art NNs.
Tracking free-form activity using wifi signals
Yili Ren, Zi Wang, Sheng Tan, Yingying Chen, Jie Yang
DOI: https://doi.org/10.1145/3447993.3482857
Abstract: WiFi human sensing has become increasingly attractive in enabling emerging human-computer interaction applications. The corresponding techniques have gradually evolved from the classification of multiple activity types to more fine-grained tracking of 3D human poses. However, existing WiFi-based 3D human pose tracking is limited to a set of predefined activities. In this work, we present Winect, a 3D human pose tracking system for free-form activity using commodity WiFi devices. Our system tracks free-form activity by estimating a 3D skeleton pose that consists of a set of joints of the human body. In particular, Winect first identifies the moving limbs by leveraging the signals reflected off the human body and separates the entangled signals for each limb. Then, our system tracks each limb and constructs a 3D skeleton of the body by modeling the inherent relationship between the movements of the limb and the corresponding joints. Our evaluation results show that Winect achieves centimeter-level accuracy for free-form activity tracking under various environments.
{"title":"Design and implementation of a generic 5G user plane function development framework","authors":"Cheng-Ying Hsieh, Yao-Wen Chang, Chien-Chia Chen, Jyh-cheng Chen","doi":"10.1145/3447993.3482867","DOIUrl":"https://doi.org/10.1145/3447993.3482867","url":null,"abstract":"In 5G, the requirement of transmission latency is stricter than that in 4G. To enhance transmission efficiency, a user plane function (UPF) with a specific packet processing mechanism is necessary. However, UPF must communicate with the session management function (SMF), which will send the packet processing rules to UPF. Those rules will substantially occupy UPF storage. Moreover, customizing a UPF needs to reconstruct N3, N4, N6, and N9 interfaces, which takes much time for developers. To this end, we propose the user plane function development framework (UPFDF), which modularizes the functions in the UPF, supporting customization to connect different types of packet processing mechanisms. With UPFDF, we address the UPF capacity problem and improve the flexibility of the system.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123660598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An aerodynamic, computer vision, and network simulator for networked drone applications
Shengkun Tang, Cheng-Hsin Hsu, Zhigang Tian, Xin Su
DOI: https://doi.org/10.1145/3447993.3482862
Abstract: In this extended abstract, we develop, implement, and demonstrate an open-source simulator, called AirSimN, for evaluating drone-based wireless networks. AirSimN differs from all prior attempts in the literature because it concurrently supports aerodynamic, computer vision, and network simulations. We carefully design it to minimize the effort of realizing virtually arbitrary drone applications, thanks to the active and popular AirSim and NS-3 projects. Many mobile computing and wireless networking projects on, e.g., drone feedback controllers, drone vision algorithms, and 5G/6G cellular network planning can leverage AirSimN for large-scale evaluations.
Heart rate trend forecasting during high-intensity interval training using consumer wearable devices
Illia Fedorin, Kostyantyn Slyusarenko, V. Pohribnyi, JongSeok Yoon, Gunguk Park, Hyunsu Kim
DOI: https://doi.org/10.1145/3447993.3482870
Abstract: High-intensity interval training is one of the most popular and dynamically developing fitness innovations of recent years. Professional runners have long used interval training, alternating between high-intensity sprints and low-intensity jogging intervals to improve their overall performance. During such exercises, accurate monitoring and prediction of heart rate dynamics is particularly important for controlling a person's physiological state and preventing possible pathological consequences. At the same time, heart rate estimation with today's popular wearable devices (such as smartwatches and fitness belts) during high-intensity exercise can be quite inaccurate. This inaccuracy arises mostly because the heart rate sensors (photoplethysmogram (PPG) and electrocardiogram (ECG)) are exposed to noise from motion artifacts: the PPG sensor suffers from periodic ambient light saturation due to intensive hand motion, and the ECG is noisy due to electrode contact-area changes caused by body deformation. To solve this problem, this paper develops a deep learning framework for motion-resistant heart rate estimation. The system combines signal processing approaches for raw sensor data processing with deep learning architectures (convolutional and recurrent neural networks) for real-time heart rate measurement and forecasting of future heart rate dynamics.
Detection of evil flies: securing air-ground aviation communication
Suleman Khan, Pardeep Kumar, An Braeken, A. Gurtov
DOI: https://doi.org/10.1145/3447993.3482869
Abstract: The aviation community employs various air traffic control and mobile communication technologies, such as ubiquitous data links and wireless communication architectures and protocols. Recently, software-defined networking (SDN)-based architectures (i.e., the cockpit network communications environment testing (COMET) architecture) have been proposed for air-ground communication. However, an adversary can break the communication between a pilot and air traffic control, resulting in a hazardous (or life-threatening) situation in the air or failure of ground equipment. This paper proposes an efficient evil detection and prevention mechanism (called DoEF) for the COMET architecture. The proposed DoEF utilizes a deep learning-based approach, i.e., long short-term memory (LSTM), to detect evil flies and provide possible countermeasures. Our preliminary results show that the proposed scheme reduces the detection time and increases the detection accuracy of distributed denial-of-service (DDoS) attacks for the aviation network.
Nuberu: reliable RAN virtualization in shared platforms
Gines Garcia-Aviles, Andres Garcia-Saavedra, M. Gramaglia, X. Costa, P. Serrano, A. Banchs
DOI: https://doi.org/10.1145/3447993.3483266
Abstract: RAN virtualization will become a key technology for the last mile of next-generation mobile networks, driven by initiatives such as the O-RAN alliance. However, due to the computing fluctuations inherent to wireless dynamics and resource contention in shared computing infrastructure, the price of migrating from dedicated to shared platforms may be too high. Indeed, we show in this paper that the baseline architecture of a base station's distributed unit (DU) collapses upon moments of deficit in computing capacity. Recent solutions to accelerate some signal processing tasks certainly help but do not tackle the core problem: a DU pipeline that requires predictable computing to provide carrier-grade reliability. We present Nuberu, a novel pipeline architecture for 4G/5G DUs specifically engineered for non-deterministic computing platforms. Our design has one key objective to attain reliability: guarantee a minimum set of signals that preserve synchronization between the DU and its users during computing capacity shortages and, provided this, maximize network throughput. To this end, we use techniques such as tight deadline control, jitter-absorbing buffers, predictive HARQ, and congestion control. Using an experimental prototype, we show that Nuberu attains >95% of the theoretical spectrum efficiency in hostile environments, where state-of-the-art approaches lose connectivity, and at least 80% resource savings.
{"title":"Octopus","authors":"Zhe Chen, Tianyue Zheng, Jun Luo","doi":"10.2307/j.ctt1ffjdz1.28","DOIUrl":"https://doi.org/10.2307/j.ctt1ffjdz1.28","url":null,"abstract":"","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115715406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PassiveLiFi
M. Mir, Borja Genovés Guzmán, Ambuj Varshney, D. Giustiniano
DOI: https://doi.org/10.1145/3447993.3483262
Abstract: Light bulbs have recently been explored to design light fidelity (LiFi) communication to battery-free tags, thus complementing radio frequency (RF) backscatter in the uplink. In this paper, we show that LiFi and RF backscatter are complementary and have unexplored interactions. We introduce PassiveLiFi, a battery-free system that uses LiFi to transmit RF backscatter on a meagre power budget. We address several challenges in the system design of the LiFi transmitter, the tag, and the RF receiver. We design the first LiFi transmitter that implements chirp spread spectrum (CSS) using the visible light spectrum. We use a small bank of solar cells for communication and harvesting, and reconfigure them based on the amount of harvested energy and the desired data rate. We further alleviate the low responsiveness of solar cells with a new low-power receiver design in the tag. Experimental results with an RF carrier of 17 dBm show that we can generate RF backscatter with a range of 80.3 meters per μW consumed in the tag, almost double that of prior work.