{"title":"PDVocal","authors":"Hanbin Zhang, Chen Song, Aosen Wang, Chenhan Xu, Dongmei Li, Wenyao Xu","doi":"10.1145/3300061.3300125","DOIUrl":"https://doi.org/10.1145/3300061.3300125","url":null,"abstract":"Parkinson's disease (PD) is a chronic neurodegenerative disorder resulting from the progressive loss of dopaminergic nerve cells. People with PD usually demonstrate deficits in performing basic daily activities, and the relevant annual social cost can reach about $25 billion in the United States. Early detection of PD plays an important role in symptom relief and improvement in the performance of activities in daily life (ADL), which eventually reduces societal and economic burden. However, conventional PD detection methods are inconvenient in daily life (textite.g., requiring users to wear sensors). To overcome this challenge, we propose and identify the non-speech body sounds as the new PD biomarker, and utilize the data in smartphone usage to realize the passive PD detection in daily life without interrupting the user. Specifically, we present PDVocal, an end-to-end smartphone-based privacy-preserving system towards early PD detection. PDVocal can passively recognize the PD digital biomarkers in the voice data during daily phone conversation. At the user end, PDVocal filters the audio stream and only extracts the non-speech body sounds (textite.g., breathing, clearing throat and swallowing) which contain no privacy-sensitive content. At the cloud end, PDVocal analyzes the body sounds of interest and assesses the health condition using a customized residual network. For the sake of reliability in real-world PD detection, we investigate the method of the performance optimizer including an opportunistic learning knob and a long-term tracking protocol. We evaluate our proposed PDVocal on a collected dataset from 890 participants and real-life conversations from publicly available data sources. Results indicate that non-speech body sounds are a promising digital biomarker for privacy-preserving PD detection in daily life.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129539683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster: SeamFarm -- Distributed Data Analytic for Precision Agriculture based on Seamless Computing","authors":"Dahyun Kim, Muhammad Rusyadi Ramli, Jae-Min Lee, Dong‐Seong Kim","doi":"10.1145/3300061.3343400","DOIUrl":"https://doi.org/10.1145/3300061.3343400","url":null,"abstract":"This work proposes a framework for distributed data analytic for precision agriculture based on seamless computing paradigm named SeamFarm. Generally, heterogeneous nodes deployed for precision agriculture where these nodes generated an extensive amount of data. Then machine learning can be used to analyze this data for precision agriculture. However, most of the IoT devices are resource-constrained devices, which results in poor performance while conducting a machine learning task. Thus, in SeamFarm, we consider distributing the data as well as the task to all available nodes. The results show that SeamFarm can meet all of the functional and non-functional requirements of distributed data analytic for precision agriculture. Moreover, it can obtain faster data analytic results.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115587290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contactless Infant Monitoring using White Noise","authors":"Anran Wang, Jacob E. Sunshine, Shyamnath Gollakota","doi":"10.1145/3300061.3345453","DOIUrl":"https://doi.org/10.1145/3300061.3345453","url":null,"abstract":"White noise machines are among the most popular devices to facilitate infant sleep. We introduce the first contactless system that uses white noise to achieve motion and respiratory monitoring in infants. Our system is designed for smart speakers that can monitor an infant's sleep using white noise. The key enabler underlying our system is a set of novel algorithms that can extract the minute infant breathing motion as well as position information from white noise which is random in both the time and frequency domain. We describe the design and implementation of our system, and present experiments with a life-like infant simulator as well as a clinical study at the neonatal intensive care unit with five new-born infants. Our study demonstrates that the respiratory rate computed by our system is highly correlated with the ground truth with a correlation coefficient of 0.938.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130735224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster: Enhancing Capacity in Multi-hop Wireless Networks by Joint Node Units","authors":"Fei Ge, L. Tan, Xun Gao, Juan Luo, Wei Zhang, Ming Liu","doi":"10.1145/3300061.3343390","DOIUrl":"https://doi.org/10.1145/3300061.3343390","url":null,"abstract":"Achievable capacity in multi-hop wireless networks is seriously lower than single-hop communication. Two full-duplex nodes potentially have 2× capacity in wireless communications, compared to two half-duplex nodes. Organize the two nodes as one unit and reorganize nodes to be the units in multi-hop paths, the capacity can achieve 1× to 2× under space division simultaneous transmission mode, which is verified by analysis and simulation results.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130741597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: A ROS-based Robot with Distributed Sensors for Seamless People Tracking","authors":"Ling-Yan Zhang, Kun-Ru Wu, Ting-Yuan Ke, Chih-Hsiang Wang, Y. Tseng","doi":"10.1145/3300061.3343369","DOIUrl":"https://doi.org/10.1145/3300061.3343369","url":null,"abstract":"This paper presents a robot for people identification and tracking developed on robot operating system (ROS). It achieves modulized, light-weight, low-cost, and high-performance design goals even with the existence of distributed sensors. The key idea is to utilize wearable devices to enhance the people tracking capability of a robot through instant wireless communications and multi-sensory data fusion. Experimental results in a realistic environment demonstrate that our robot can keep tracking a specific person at a safe distance even without seeing the biological features of the person, who walks in a crowd with complex trajectories.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128124354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taprint","authors":"Wenqiang Chen, Lin Chen, Yandao Huang, Xinyu Zhang, Lu Wang, Rukhsana Ruby, Kaishun Wu","doi":"10.1145/3300061.3300124","DOIUrl":"https://doi.org/10.1145/3300061.3300124","url":null,"abstract":"Smart wristband has become a dominant device in the wearable ecosystem, providing versatile functions such as fitness tracking, mobile payment, and transport ticketing. However, the small form-factor, low-profile hardware interfaces and computational resources limit their capabilities in security checking. Many wristband devices have recently witnessed alarming vulnerabilities, e.g., personal data leakage and payment fraud, due to the lack of authentication and access control. To fill this gap, we propose a secure text pin input system, namely Taprint, which extends a virtual number pad on the back of a user's hand. Taprint builds on the key observation that the hand \"landmarks'', especially finger knuckles, bear unique vibration characteristics when being tapped by the user herself. It thus uses the tapping vibrometry as biometrics to authenticate the user, while distinguishing the tapping locations. Taprint reuses the inertial measurement unit in the wristband, \"overclocks'' its sampling rate to extrapolate fine-grained features, and further refines the features to enhance the uniqueness and reliability. Extensive experiments on 128 users demonstrate that Taprint achieves a high accuracy (96%) of keystrokes recognition. It can authenticate users, even through a single-tap, at extremely low error rate (2.4%), and under various practical usage disturbances.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116917962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge Assisted Real-time Object Detection for Mobile Augmented Reality","authors":"Luyang Liu, Hongyu Li, M. Gruteser","doi":"10.1145/3300061.3300116","DOIUrl":"https://doi.org/10.1145/3300061.3300116","url":null,"abstract":"Most existing Augmented Reality (AR) and Mixed Reality (MR) systems are able to understand the 3D geometry of the surroundings but lack the ability to detect and classify complex objects in the real world. Such capabilities can be enabled with deep Convolutional Neural Networks (CNN), but it remains difficult to execute large networks on mobile devices. Offloading object detection to the edge or cloud is also very challenging due to the stringent requirements on high detection accuracy and low end-to-end latency. The long latency of existing offloading techniques can significantly reduce the detection accuracy due to changes in the user's view. To address the problem, we design a system that enables high accuracy object detection for commodity AR/MR system running at 60fps. The system employs low latency offloading techniques, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy. The result shows that the system can improve the detection accuracy by 20.2%-34.8% for the object detection and human keypoint detection tasks, and only requires 2.24ms latency for object tracking on the AR device. Thus, the system leaves more time and computational resources to render virtual elements for the next frame and enables higher quality AR/MR experiences.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116900737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster: Video Chat Scam Detection Leveraging Screen Light Reflection","authors":"Hongbo Liu, Zhihua Li, Yucheng Xie, Ruizhe Jiang, Yan Wang, Xiaonan Guo, Yingying Chen","doi":"10.1145/3300061.3343403","DOIUrl":"https://doi.org/10.1145/3300061.3343403","url":null,"abstract":"The rapid advancement of social media and communication technology enables video chat to become an important and convenient way of daily communication. However, such convenience also makes personal video clips easily obtained and exploited by malicious users who launch scam attacks. Existing studies only deal with the attacks that use fabricated facial masks, while the liveness detection that targets the playback attacks using a virtual camera is still elusive. In this work, we develop a novel video chat liveness detection system, which can track the weak light changes reflected off the skin of a human face leveraging chromatic eigenspace differences. We design an inconspicuous challenge frame with minimal intervention to the video chat and develop a robust anomaly frame detector to verify the liveness of remote user in a video chat session. Furthermore, we propose a resilient defense strategy to defeat both naive and intelligent playback attacks leveraging spatial and temporal verification. The evaluation results show that our system can achieve accurate and robust liveness detection with the accuracy and false detection rate as high as 97.7% (94.8%) and 1% (1.6%) on smartphones (laptops), respectively.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124863538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster: Energy Efficient Mobile Video Transmission over Wireless Networks in IoT Applications","authors":"B. Cheng, Ming Wang, Junliang Chen","doi":"10.1145/3300061.3343383","DOIUrl":"https://doi.org/10.1145/3300061.3343383","url":null,"abstract":"Video surveillance is an important application of Internet of Thing (IoT) that provides convenience remote monitoring service to end users. How to reduce the power consumption of mobile devices with limited energy has become a research hotspot. We propose an energy-efficient architecture that includes smartphones' various power saving solutions for video transmission over wireless networks. Results demonstrate that each of our proposed solutions is significantly outperforms than the standard scheme.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123398522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"XModal-ID: Using WiFi for Through-Wall Person Identification from Candidate Video Footage","authors":"Belal Korany, Chitra R. Karanam, H. Cai, Y. Mostofi","doi":"10.1145/3300061.3345437","DOIUrl":"https://doi.org/10.1145/3300061.3345437","url":null,"abstract":"In this paper, we propose XModal-ID, a novel WiFi-video cross-modal gait-based person identification system. Given the WiFi signal measured when an unknown person walks in an unknown area and a video footage of a walking person in another area, XModal-ID can determine whether it is the same person in both cases or not. XModal-ID only uses the Channel State Information (CSI) magnitude measurements of a pair of off-the-shelf WiFi transceivers. It does not need any prior wireless or video measurement of the person to be identified. Similarly, it does not need any knowledge of the operation area or person's track. Finally, it can identify people through walls. XModal-ID utilizes the video footage to simulate the WiFi signal that would be generated if the person in the video walked near a pair of WiFi transceivers. It then uses a new processing approach to robustly extract key gait features from both the real WiFi signal and the video-based simulated one, and compares them to determine if the person in the WiFi area is the same person in the video. We extensively evaluate XModal-ID by building a large test set with $8$ subjects, $2$ video areas, and $5$ WiFi areas, including 3 through-wall areas as well as complex walking paths, all of which are not seen during the training phase. Overall, we have a total of 2,256 WiFi-video test pairs. XModal-ID then achieves an $85%$ accuracy in predicting whether a pair of WiFi and video samples belong to the same person or not. Furthermore, in a ranking scenario where XModal-ID compares a WiFi sample to $8$ candidate video samples, it obtains top-1, top-2, and top-3 accuracies of $75%$, $90%$, and $97%$. These results show that XModal-ID can robustly identify new people walking in new environments, in various practical scenarios.","PeriodicalId":223523,"journal":{"name":"The 25th Annual International Conference on Mobile Computing and Networking","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129827912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}