{"title":"DeepAd","authors":"Lien-Wu Chen, Wei-Chu Huang","doi":"10.1145/3447993.3482864","DOIUrl":"https://doi.org/10.1145/3447993.3482864","url":null,"abstract":"In this paper, we design and implement a deep advertising signage system, called DeepAd, with context-aware advertisement and cyber-physical interaction based on Internet of Things (IoT) technologies. In the DeepAd system, instant sensing and diverse interacting features are integrated with an IoT signage, which can (1) transmit multimedia contents and receive specific messages to/from smartphone users, (2) sense and interact with nearby individuals through image sensors, (3) embed context-aware advertisement information in sound waves, and (4) customize the on-screen 3D doll with an audience's face on demand. Through built-in sensors and smartphone interfaces, DeepAd can interact with nearby audiences via real-time multimedia services on the IoT signage in a click-and-drag manner. In addition, DeepAd investigates data-over-sound techniques to send embedded status-related advertisement via background music/voice. Furthermore, DeepAd explores deep learning based face changing and recognition to provide innovative and customized services to smartphone users. This paper demonstrates our current prototype consisting of the Android App, advertising server, and IoT signage.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125869269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BioFace-3D: continuous 3d facial reconstruction through lightweight single-ear biosensors","authors":"Yi Wu, Vimal Kakaraparthi, Zhuohang Li, Tien Pham, Jian Liu, Phuc Nguyen","doi":"10.1145/3447993.3483252","DOIUrl":"https://doi.org/10.1145/3447993.3483252","url":null,"abstract":"Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications such as human-computer interactions, facial expression analysis, and emotion recognition, etc. Traditional approaches require users to be confined to a particular location and face a camera under constrained recording conditions (e.g., without occlusions and under good lighting conditions). This highly restricted setting prevents them from being deployed in many application scenarios involving human motions. In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense the entire facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of the cross-modal transfer learning model to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, our BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications. Extensive experiments involving 16 participants under various settings demonstrate that BioFace-3D can accurately track 53 major facial landmarks with only 1.85 mm average error and 3.38% normalized mean error, which is comparable with most state-of-the-art camera-based solutions. The rendered 3D facial animations, which are in consistency with the real human facial movements, also validate the system's capability in continuous 3D facial reconstruction.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130876190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RadioInLight","authors":"Minhao Cui, Qing Wang, Jie Xiong","doi":"10.1145/3447993.3483271","DOIUrl":"https://doi.org/10.1145/3447993.3483271","url":null,"abstract":"Visible Light Communication (VLC) is considered a new paradigm for next-generation wireless communication. Recently, studies show that during the process of VLC transmission, besides the visible light signals, the transmitter also leaks out RF signals through a side channel. What is interesting is that the data transmitted in the VLC channel can be inferred from the leaked RF signals. Fundamentally, it means the leaked RF signals carry a copy of the same data in the VLC channel. In this work, we show for the first time that besides inferring the original VLC data, the leaked side channel can be smartly leveraged to carry new data, significantly increasing the data rate of current VLC systems. To realize this objective, we propose a system named RadioInLight, with designs spanning across hardware and software. Without any dedicated active RF transmission front-end which consumes power and hardware resources, RadioInLight is able to double the data rate of the VLC system by purely manipulating the free passively leaked RF signals without affecting the data rate of the original VLC transmissions.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114624787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RISE: robust wireless sensing using probabilistic and statistical assessments","authors":"Shuangjiao Zhai, Zhanyong Tang, P. Nurmi, Dingyi Fang, Xiaojiang Chen, Z. Wang","doi":"10.1145/3447993.3483253","DOIUrl":"https://doi.org/10.1145/3447993.3483253","url":null,"abstract":"Wireless sensing builds upon machine learning shows encouraging results. However, adopting wireless sensing as a large-scale solution remains challenging as experiences from deployments have shown the performance of a machine-learned model to suffer when there are changes in the environment, e.g., when furniture is moved or when other objects are added or removed from the environment. We present Rise, a novel solution for enhancing the robustness and performance of learning-based wireless sensing techniques against such changes during a deployment. Rise combines probability and statistical assessments together with anomaly detection to identify samples that are likely to be misclassified and uses feedback on these samples to update a deployed wireless sensing model. We validate Rise through extensive empirical benchmarks by considering 11 representative sensing methods covering a broad range of wireless sensing tasks. Our results show that Rise can identify 92.3% of misclassifications on average. We showcase how Rise can be combined with incremental learning to help wireless sensing models retain their performance against dynamic changes in the operating environment to reduce the maintenance cost, paving the way for learning-based wireless sensing to become capable of supporting long-term monitoring in complex everyday environments.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124560142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Insecurity of operational cellular IoT service: new vulnerabilities, attacks, and countermeasures","authors":"Sihan Wang, Guan-Hua Tu, Xi Lei, Tian Xie, Chi-Yu Li, Polun Chou, Fu-Cheng Hsieh, Yiwen Hu, Li Xiao, Chunyi Peng","doi":"10.1145/3447993.3483239","DOIUrl":"https://doi.org/10.1145/3447993.3483239","url":null,"abstract":"More than 150 cellular networks worldwide have rolled out massive IoT services such as smart metering and environmental monitoring. Such cellular IoT services share the existing cellular network architecture with non-IoT (e.g., smartphone) ones. When they are newly integrated into the cellular network, new security vulnerabilities may happen from imprudent integration. In this work, we explore the security vulnerabilities of the cellular IoT from both system-integrated and service-integrated aspects. We discover five vulnerabilities spanning cellular standard design defects, network operation slips, and IoT device implementation flaws. Threateningly, they allow an adversary to remotely identify IP addresses and phone numbers assigned to cellular IoT devices and launch data/text spamming attacks against them. We experimentally validate these vulnerabilities and attacks with three major U.S. IoT carriers. The attack evaluation result shows that the adversary can raise an IoT data bill by up to $226 with less than 120 MB spam traffic and increase an IoT text bill at a rate of $5 per second; moreover, cellular IoT devices may suffer from denial of IoT services. We finally propose, prototype, and evaluate recommended solutions.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115452103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SiWa","authors":"Tianyue Zheng, Zhe Chen, Jun Luo, Lin Ke, Chao Zhao, Yaowen Yang","doi":"10.1145/3447993.3483258","DOIUrl":"https://doi.org/10.1145/3447993.3483258","url":null,"abstract":"Being able to see into walls is crucial for diagnostics of building health; it enables inspections of wall structure without undermining the structural integrity. However, existing sensing devices do not seem to offer a full capability in mapping the in-wall structure while identifying their status (e.g., seepage and corrosion). In this paper, we design and implement SiWa as a low-cost and portable system for wall inspections. Built upon a customized IR-UWB radar, SiWa scans a wall as a user swipes its probe along the wall surface; it then analyzes the reflected signals to synthesize an image and also to identify the material status. Although conventional schemes exist to handle these problems individually, they require troublesome calibrations that largely prevent them from practical adoptions. To this end, we equip SiWa with a deep learning pipeline to parse the rich sensory data. With innovative construction and training, the deep learning modules perform structural imaging and the subsequent analysis on material status, without the need for repetitive parameter tuning and calibrations. We build SiWa as a prototype and evaluate its performance via extensive experiments and field studies; results evidently confirm that SiWa accurately maps in-wall structures, identifies their materials, and detects possible defects, suggesting a promising solution for diagnosing building health with minimal effort and cost.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114966741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FLUID-XP: flexible user interface distribution for cross-platform experience","authors":"Sunjae Lee, Hayeon Lee, Hoyoung Kim, Sangmin Lee, Jeongim Choi, Yuseung Lee, Seono Lee, Ahyeon Kim, J. Y. Song, Sangeun Oh, Steven Y. Ko, I. Shin","doi":"10.1145/3447993.3483245","DOIUrl":"https://doi.org/10.1145/3447993.3483245","url":null,"abstract":"Being able to use a single app across multiple devices can bring novel experiences to the users in various domains including entertainment and productivity. For instance, a user of a video editing app would be able to use a smart pad as a canvas and a smartphone as a remote toolbox so that the toolbox does not occlude the canvas during editing. However, existing approaches do not properly support the single-app multi-device execution due to several limitations, including high development cost, device heterogeneity, and high performance requirement. In this paper, we introduce FLUID-XP, a novel cross-platform multi-device system that enables UIs of a single app to be executed across heterogeneous platforms, while overcoming the limitations of previous approaches. FLUID-XP provides flexible, efficient, and seamless interactions by addressing three main challenges: i) how to transparently enable a single-display app to use multiple displays, ii) how to distribute UIs across heterogeneous devices with minimal network traffic, and iii) how to optimize the UI distribution process when multiple UIs have different distribution requirements. Our experiments with a working prototype of FLUID-XP on Android confirm that FLUID-XP successfully supports a variety of unmodified real-world apps across heterogeneous platforms (Android, iOS, and Linux). We also conduct a lab study with 25 participants to demonstrate the effectiveness of FLUID-XP with real users.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123073625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face-Mic: inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors","authors":"Cong Shi, Xiangyu Xu, Tianfang Zhang, Pa Walker, Yi Wu, Jian Liu, Nitesh Saxena, Yingying Chen, Jiadi Yu","doi":"10.1145/3447993.3483272","DOIUrl":"https://doi.org/10.1145/3447993.3483272","url":null,"abstract":"Augmented reality/virtual reality (AR/VR) has extended beyond 3D immersive gaming to a broader array of applications, such as shopping, tourism, education. And recently there has been a large shift from handheld-controller dominated interactions to headset-dominated interactions via voice interfaces. In this work, we show a serious privacy risk of using voice interfaces while the user is wearing the face-mounted AR/VR devices. Specifically, we design an eavesdropping attack, Face-Mic, which leverages speech-associated subtle facial dynamics captured by zero-permission motion sensors in AR/VR headsets to infer highly sensitive information from live human speech, including speaker gender, identity, and speech content. Face-Mic is grounded on a key insight that AR/VR headsets are closely mounted on the user's face, allowing a potentially malicious app on the headset to capture underlying facial dynamics as the wearer speaks, including movements of facial muscles and bone-borne vibrations, which encode private biometrics and speech characteristics. To mitigate the impacts of body movements, we develop a signal source separation technique to identify and separate the speech-associated facial dynamics from other types of body movements. We further extract representative features with respect to the two types of facial dynamics. We successfully demonstrate the privacy leakage through AR/VR headsets by deriving the user's gender/identity and extracting speech information via the development of a deep learning-based framework. Extensive experiments using four mainstream VR headsets validate the generalizability, effectiveness, and high accuracy of Face-Mic.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127637328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mSAIL","authors":"Inhee Lee, Roger Hsiao, G. Carichner, Chin-Wei Hsu, Mingyu Yang, Sara Shoouri, Katherine Ernst, Tess Carichner, Yuyang Li, Jaechan Lim, Cole R. Julick, Eunseong Moon, Yi Sun, Jamie Phillips, K. Montooth, D. A. Green, Hun-Seok Kim, D. Blaauw","doi":"10.1145/3447993.3483263","DOIUrl":"https://doi.org/10.1145/3447993.3483263","url":null,"abstract":"Each fall, millions of monarch butterflies across the northern US and Canada migrate up to 4,000 km to overwinter in the exact same cluster of mountain peaks in central Mexico. To track monarchs precisely and study their navigation, a monarch tracker must obtain daily localization of the butterfly as it progresses on its 3-month journey. And, the tracker must perform this task while having a weight in the tens of milligram (mg) and measuring a few millimeters (mm) in size to avoid interfering with monarch's flight. This paper proposes mSAIL, 8 × 8 × 2.6 mm and 62 mg embedded system for monarch migration tracking, constructed using 8 prior custom-designed ICs providing solar energy harvesting, an ultra-low power processor, light/temperature sensors, power management, and a wireless transceiver, all integrated and 3D stacked on a micro PCB with an 8 × 8 mm printed antenna. The proposed system is designed to record and compress light and temperature data during the migration path while harvesting solar energy for energy autonomy, and wirelessly transmit the data at the overwintering site in Mexico, from which the daily location of the butterfly can be estimated using a deep learning-based localization algorithm. A 2-day trial experiment of mSAIL attached on a live butterfly in an outdoor botanical garden demonstrates the feasibility of individual butterfly localization and tracking.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"251 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114246570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MagX","authors":"Dongyao Chen, Mingke Wang, Chenxi He, Qing Luo, Yasha Iravantchi, Alanson P. Sample, K. Shin, Xinbing Wang","doi":"10.1145/3447993.3483260","DOIUrl":"https://doi.org/10.1145/3447993.3483260","url":null,"abstract":"Accurate tracking of the hands and fingers allows users to employ natural gestures in various interactive applications. Hand tracking also supports health applications, such as monitoring face-touching, a common vector for infectious disease. However, for both types of applications, the utility of hand tracking is often limited by the impracticality of bulky tethered systems (e.g., instrumented gloves) or inherent limitations (e.g., Line of Sight or privacy concerns with vision-based systems). These limitations have severely restricted the adoption of hand tracking in real-world applications. We present MagX, a fully untethered on-body hand tracking system utilizing passive magnets and a novel magnetic sensing platform. Since passive magnets require no maintenance, they can be worn on the hands indefinitely, and only the sensor board needs recharging, akin to a smartwatch. We used MagX to conduct a series of experiments, finding a wearable sensing array can achieve millimeter-accurate 5 DoF tracking of two magnets independently. For example, at 11 cm distance, a 6cm × 6cm sensing array can achieve positional and orientational errors of 0.76 cm and 0.11 rad. At 21 cm distance, the tracking errors are 2.65 cm and 0.41 rad. MagX can leverage larger sensor arrays for improved long-distance tracking, e.g., a 9.8cm × 9.8cm array can achieve 2.62 cm and 0.55 rad tracking performance on two magnets at 27 cm distance. The robust performance can facilitate ubiquitous adoption of magnetic tracking in various applications. Finally, MagX can perform all compute locally and only requires 0.38W total (220mW on the sensor platform plus 159mW on the computing unit) to perform real-time tracking, offering \"all day\" fully untethered operation on a typical smartwatch-sized battery.","PeriodicalId":177431,"journal":{"name":"Proceedings of the 27th Annual International Conference on Mobile Computing and Networking","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117197153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}