{"title":"Marauder","authors":"M. Ramanujam, H. Madhyastha, R. Netravali","doi":"10.1145/3458864.3466866","DOIUrl":"https://doi.org/10.1145/3458864.3466866","url":null,"abstract":"","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120993064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LensCap","authors":"Jinhan Hu, Andrei Iosifescu, R. Likamwa","doi":"10.1145/3458864.3467676","DOIUrl":"https://doi.org/10.1145/3458864.3467676","url":null,"abstract":"Augmented Reality (AR) enables smartphone users to interact with virtual content spatially overlaid on a continuously captured physical world. Under the current permission enforcement model in popular operating systems, AR apps are given Internet permission at installation time, and request camera permission and external storage write permission at runtime through a user's approval. With these permissions granted, any Internet-enabled AR app could silently collect camera frames and derived visual information for malicious intent without a user's awareness. This raises serious concerns about the disclosure of private user data in their living environments. To give users more control over application usage of their camera frames and the information derived from them, we introduce LensCap, a split-process app design framework, in which the app is split into a camera-handling visual process and a connectivity-handling network process. At runtime, LensCap manages secured communications between split processes, enacting fine-grained data usage monitoring. LensCap also allows both processes to present interactive user interfaces. With LensCap, users can decide what forms of visual data can be transmitted to the network, while still allowing visual data to be used for AR purposes on device. We prototype LensCap as an Android library and demonstrate its usability as a plugin in Unreal Engine. Performance evaluation results on five AR apps confirm that visual privacy can be preserved with an insignificant latency penalty (< 1.3 ms) at 60 FPS.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128185224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pain-o-vision, effortless pain management","authors":"B. Ramprasad, Hongkai Chen, A. Veith, K. Truong, E. D. Lara","doi":"10.1145/3458864.3466907","DOIUrl":"https://doi.org/10.1145/3458864.3466907","url":null,"abstract":"Tracking chronic pain and collecting data about it is an ongoing challenge for patients. Pain-O-Vision is a smartwatch-enabled pain management system that uses computer vision to capture the details of painful events from the user. A natural reaction to pain is to clench one's fist. The embedded camera is used to capture different types of fist clenching, representing different levels of pain. An initial prototype was built on an Android smartwatch that uses a cloud-based classification service to detect fist clench gestures. Our results show that it is possible to map a fist clench to different levels of pain, which allows the patient to record the intensity of a painful event without carrying a specialized pain management device.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132059929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LATTE","authors":"H. Pasandi, T. Nadeem","doi":"10.5040/9781350122741.1001332","DOIUrl":"https://doi.org/10.5040/9781350122741.1001332","url":null,"abstract":"","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"51 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129997908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thermotag","authors":"Xingyu Chen, Jia Liu, Fu Xiao, Shigang Chen, Lijun Chen","doi":"10.1145/3458864.3467879","DOIUrl":"https://doi.org/10.1145/3458864.3467879","url":null,"abstract":"Temperature sensing plays a significant role in upholding quality assurance and meeting regulatory compliance in a wide variety of applications, such as fire safety and cold chain monitoring. However, existing temperature measurement devices are bulky, cost-prohibitive, or battery-powered, making item-level sensing and intelligence costly. In this paper, we present a novel tag-based thermometer called Thermotag, which uses a common passive RFID tag to sense the temperature with competitive advantages of being low-cost, battery-free, and robust to environmental conditions. The basic idea of Thermotag is that the resistance of a semiconductor diode in a tag's chip is temperature-sensitive. By measuring the discharging period through the reverse-polarized diode, we can estimate the temperature indirectly. We propose a standards-compliant measurement scheme of the discharging period by using a tag's volatile memory and build a mapping model between the discharging period and temperature for accurate and reliable temperature sensing. We implement Thermotag using a commercial off-the-shelf RFID system, with no need for any firmware or hardware modifications. Extensive experiments show that the temperature measurement has a large span ranging from 0 °C to 85 °C and a mean error of 2.7 °C.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130979467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpaceBeam","authors":"Timothy Woodford, Xinyu Zhang, Eugene Chai, K. Sundaresan, Amir Khojastepour","doi":"10.1145/3458864.3466864","DOIUrl":"https://doi.org/10.1145/3458864.3466864","url":null,"abstract":"mmWave 5G networks promise to enable a new generation of networked applications requiring a combination of high throughput and ultra-low latency. However, in practice, mmWave performance scales poorly for large numbers of users due to the significant overhead required to manage the highly-directional beams. We find that we can substantially reduce or eliminate this overhead by using out-of-band infrared measurements of the surrounding environment generated by a LiDAR sensor. To accomplish this, we develop a ray-tracing system that is robust to noise and other artifacts from the infrared sensor, create a method to estimate the reflection strength from sensor data, and finally apply this information to the multiuser beam selection process. We demonstrate that this approach reduces beam-selection overhead by over 95% in indoor multi-user scenarios, reducing network latency by over 80% and increasing throughput by over 2× in mobile scenarios.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128375610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ThingSpire OS: a WebAssembly-based IoT operating system for cloud-edge integration","authors":"Borui Li, Hongchang Fan, Yi Gao, Wei Dong","doi":"10.1145/3458864.3466910","DOIUrl":"https://doi.org/10.1145/3458864.3466910","url":null,"abstract":"We advocate ThingSpire OS, a new IoT operating system based on WebAssembly for cloud-edge integration. By design, WebAssembly is treated as a first-class citizen in ThingSpire OS to achieve coherent execution across IoT device, edge, and cloud. Furthermore, ThingSpire OS enables efficient execution of WebAssembly on resource-constrained devices by implementing a WebAssembly runtime based on Ahead-of-Time (AoT) compilation with a small footprint, achieves seamless inter-module communication wherever the modules are located, and leverages several optimizations such as lightweight preemptible invocation for memory isolation and control-flow integrity. We implement a prototype of ThingSpire OS and conduct preliminary evaluations of its inter-module communication performance.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115644493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lost and Found!: associating target persons in camera surveillance footage with smartphone identifiers","authors":"Hansi Liu, Abrar Alali, Mohamed Ibrahim, Hongyu Li, M. Gruteser, Shubham Jain, Kristin J. Dana, A. Ashok, Bin Cheng, Hongsheng Lu","doi":"10.1145/3458864.3466904","DOIUrl":"https://doi.org/10.1145/3458864.3466904","url":null,"abstract":"We demonstrate an application for finding target persons in surveillance video. Each visually detected participant is tagged with a smartphone ID, and the target person with the query ID is highlighted. This work is motivated by the fact that establishing associations between subjects observed in camera images and messages transmitted from their wireless devices can enable fast and reliable tagging. This is particularly helpful when target pedestrians need to be found on public surveillance footage, without reliance on facial recognition. The underlying system uses a multi-modal approach that leverages WiFi Fine Timing Measurements (FTM) and inertial sensor (IMU) data to associate each visually detected individual with a corresponding smartphone identifier. These smartphone measurements are combined strategically with RGB-D information from the camera, to learn affinity matrices using a multi-modal deep learning network.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132866441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Open source RAN slicing on POWDER: a top-to-bottom O-RAN use case","authors":"David Johnson, Dustin Maas, J. Merwe","doi":"10.1145/3458864.3466912","DOIUrl":"https://doi.org/10.1145/3458864.3466912","url":null,"abstract":"This demonstration will showcase our efforts to develop a radio access network (RAN) slicing mechanism that is controllable via management software in an Open RAN framework. To our knowledge, our work represents the first effort that combines an open source Open RAN framework with an open source mobility stack, provides a top-to-bottom RAN application via the RAN intelligent control (RIC) provided by that framework and illustrates its functionality in a realistic wireless environment. Our software is publicly available and we provide a profile in the POWDER platform to enable others to replicate and build on our work.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115022649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mmMesh","authors":"Hongfei Xue, Yan Ju, Chenglin Miao, Yijiang Wang, Shiyang Wang, Aidong Zhang, Lu Su","doi":"10.1145/3458864.3467679","DOIUrl":"https://doi.org/10.1145/3458864.3467679","url":null,"abstract":"In this paper, we present mmMesh, the first real-time 3D human mesh estimation system using commercial portable millimeter-wave devices. mmMesh is built upon a novel deep learning framework that can dynamically locate the moving subject and capture his/her body shape and pose by analyzing the 3D point cloud generated from the mmWave signals that bounce off the human body. The proposed deep learning framework addresses a series of challenges. First, it encodes a 3D human body model, which enables mmMesh to estimate complex and realistic-looking 3D human meshes from sparse point clouds. Second, it can accurately align the 3D points with their corresponding body segments despite the influence of ambient points as well as the error-prone nature and the multi-path effect of the RF signals. Third, the proposed model can infer missing body parts from the information of the previous frames. Our evaluation results on a commercial mmWave sensing testbed show that our mmMesh system can accurately localize the vertices on the human mesh with an average error of 2.47 cm. The superior experimental results demonstrate the effectiveness of our proposed human mesh construction system.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122659095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}