Proceedings of the 27th Annual International Conference on Mobile Computing and Networking: Latest Publications

Loki: improving long tail performance of learning-based real-time video adaptation by fusing rule-based models
Huanhuan Zhang, Anfu Zhou, Yuhan Hu, Chaoyue Li, Guangping Wang, Xinyu Zhang, Huadong Ma, Leilei Wu, Aiyun Chen, Changhui Wu
DOI: https://doi.org/10.1145/3447993.3483259 (published 2021-10-25)
Abstract: Maximizing the quality of experience (QoE) for real-time video is a long-standing challenge. Traditional video transport protocols, represented by a few deterministic rules, can hardly adapt to the heterogeneous and highly dynamic modern Internet. Emerging learning-based algorithms have demonstrated the potential to meet this challenge. However, our measurement study reveals an alarming long-tail performance issue: these algorithms tend to be bottlenecked by occasional catastrophic events caused by their built-in exploration mechanisms. In this work, we propose Loki, which improves the robustness of the learning-based model by coherently integrating it with a rule-based algorithm. To enable integration at the feature level, we first reverse-engineer the rule-based algorithm into an equivalent "black-box" neural network. Then, we design a dual-attention feature fusion mechanism to fuse it with a reinforcement learning model. We train Loki in a commercial real-time video system through online learning and evaluate it over 101 million video sessions, in comparison to state-of-the-art rule-based and learning-based solutions. The results show that Loki improves not only the average but also the tail performance substantially (26.30% to 44.24% reduction in stall rate and 1.76% to 2.17% increase in video throughput at the 95th percentile).
Citations: 19
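Note: the abstract describes the dual-attention fusion only at a high level. The following is a minimal PyTorch-style sketch of what such a module could look like, with features from the rule-based "black-box" network and the RL encoder cross-attending to each other before a shared head produces a rate decision. The module structure, feature dimensions, and output semantics are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of dual-attention feature fusion (not the authors' code).
# Feature dimensions and module structure are illustrative assumptions.
import torch
import torch.nn as nn


class DualAttentionFusion(nn.Module):
    """Fuse features from a rule-based 'black-box' net and an RL encoder."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Cross-attention: each branch attends to the other branch's features.
        self.rule_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.rl_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Head mapping the fused features to a single rate-adjustment output.
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim),
                                  nn.ReLU(),
                                  nn.Linear(feat_dim, 1))

    def forward(self, rule_feat: torch.Tensor, rl_feat: torch.Tensor) -> torch.Tensor:
        # rule_feat, rl_feat: (batch, seq_len, feat_dim) network-state features.
        rule_ctx, _ = self.rule_attn(rule_feat, rl_feat, rl_feat)   # rule attends to RL
        rl_ctx, _ = self.rl_attn(rl_feat, rule_feat, rule_feat)     # RL attends to rule
        fused = torch.cat([rule_ctx.mean(dim=1), rl_ctx.mean(dim=1)], dim=-1)
        return self.head(fused)  # e.g., a predicted sending-rate adjustment


# Usage with random tensors standing in for real network-state encodings.
fusion = DualAttentionFusion()
action = fusion(torch.randn(8, 10, 64), torch.randn(8, 10, 64))
print(action.shape)  # torch.Size([8, 1])
```
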
Human perception-enhanced camera system for web conferences leveraging device motions
Anish Shrestha, Zeyu Deng, Chen Wang
DOI: https://doi.org/10.1145/3447993.3510591 (published 2021-10-25)
Abstract: We present a demonstration of a human perception-enhanced camera system for web conferencing that protects the user's privacy. Given that people easily forget about their active camera during web conferences, the system advertises the camera's active status via its motions to remind users that they are being watched by others, which prevents inadvertent privacy leakage. The system is built around a motorized camera that moves according to the user's head coordinates, as if an eye on the desk were looking at the user's face rather than watching remotely or virtually. The basic idea is to exploit the natural human sense of environmental motion for human-camera interaction, so the user does not need to look straight at the camera or its LED light to actively check its status. In this demonstration, we showcase our implementation of the human perception-enhanced camera system and invite participants to use it for web conferences (e.g., Zoom and Google Hangouts), illustrating the system's ability to extend virtual social interaction into the physical world and the effectiveness of camera motion as a non-intrusive awareness indicator.
Citations: 0
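Note: the demo states only that the motorized camera follows the user's head coordinates. A minimal sketch of that mapping, converting a detected head position in the image to pan/tilt offsets, might look like the following; the field-of-view values, function names, and servo interface are assumptions, not the demo's actual code.

```python
# Hypothetical mapping from a detected head position to camera pan/tilt angles.
# Field-of-view values and the interface are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class CameraConfig:
    frame_w: int = 1280
    frame_h: int = 720
    fov_h_deg: float = 60.0   # assumed horizontal field of view
    fov_v_deg: float = 40.0   # assumed vertical field of view


def head_to_pan_tilt(head_x: float, head_y: float, cfg: CameraConfig) -> tuple[float, float]:
    """Convert head-center pixel coordinates into pan/tilt offsets (degrees)."""
    # Normalize pixel offsets from the image center into [-0.5, 0.5].
    dx = head_x / cfg.frame_w - 0.5
    dy = head_y / cfg.frame_h - 0.5
    # Scale by the field of view so the camera re-centers on the head.
    return dx * cfg.fov_h_deg, dy * cfg.fov_v_deg


if __name__ == "__main__":
    pan, tilt = head_to_pan_tilt(960, 300, CameraConfig())
    print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")  # pan=15.0 deg, tilt=-3.3 deg
```
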
FSA: fronthaul slicing architecture for 5G using dataplane programmable switches
Nishant Budhdev, Raj Joshi, Pravein G. Kannan, M. Chan, T. Mitra
DOI: https://doi.org/10.1145/3447993.3483247 (published 2021-10-25)
Abstract: 5G networks have been gaining pace in development and deployment in recent years. One of 5G's key objectives is to support a variety of use cases with different Service Level Objectives (SLOs). Slicing is a key part of 5G that allows operators to provide a tailored set of resources to different use cases in order to meet their SLOs. Existing works focus on slicing in the frontend or the C-RAN; however, slicing is missing in the fronthaul network that connects the frontend to the C-RAN. This leads to over-provisioning in the fronthaul and the C-RAN, and also limits the scalability of the network. In this paper, we design and implement the Fronthaul Slicing Architecture (FSA), which, to the best of our knowledge, is the first slicing architecture for the fronthaul network. FSA runs in the switch dataplane and uses information from the wireless schedule to identify the slice of a fronthaul data packet at line rate. It enables multipoint-to-multipoint routing as well as packet prioritization to provide multiplexing gains in the fronthaul and the C-RAN, making the system more scalable. Our testbed evaluation using scaled-up LTE traces shows that FSA can support accurate multipoint-to-multipoint routing for 80 Gbps of fronthaul traffic. Further, the slice-aware packet scheduling enabled by FSA's packet prioritization reduces the 95th-percentile flowlet completion times (FCT) of latency-sensitive traffic by up to 4x.
Citations: 7
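Note: FSA itself runs at line rate inside a P4 programmable switch, which is not reproduced here. As a loose illustration of the core idea only, the Python sketch below maps a fronthaul packet to a slice by looking up the allocation announced in the wireless schedule and then dequeues packets by slice priority. The field names, schedule format, slice names, and priority values are all assumptions, not FSA's dataplane program.

```python
# Illustrative (non-line-rate) sketch of slice identification from a wireless
# schedule plus priority-aware dequeueing. Field names, slice names, and the
# schedule format are assumptions, not FSA's switch program.
import heapq
from dataclasses import dataclass
from itertools import count


@dataclass
class FronthaulPacket:
    rnti: int            # radio identity of the UE this packet belongs to
    resource_block: int  # resource block carried by this fronthaul packet
    payload: bytes = b""


# Wireless schedule: (rnti, resource_block) -> slice ID, refreshed every TTI.
schedule = {(0x4601, 12): "urllc", (0x4602, 30): "embb"}
slice_priority = {"urllc": 0, "embb": 1, "mmtc": 2}  # lower value = higher priority

_queue: list = []
_seq = count()  # tie-breaker so heapq never compares packet objects


def enqueue(pkt: FronthaulPacket) -> None:
    """Classify the packet into a slice and enqueue it by slice priority."""
    slice_id = schedule.get((pkt.rnti, pkt.resource_block), "mmtc")
    heapq.heappush(_queue, (slice_priority[slice_id], next(_seq), slice_id, pkt))


def dequeue() -> tuple[str, FronthaulPacket]:
    """Pop the highest-priority packet (latency-sensitive slices first)."""
    _, _, slice_id, pkt = heapq.heappop(_queue)
    return slice_id, pkt


enqueue(FronthaulPacket(rnti=0x4602, resource_block=30))
enqueue(FronthaulPacket(rnti=0x4601, resource_block=12))
print(dequeue()[0])  # "urllc" is served before "embb"
```
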
SMART: screen-based gesture recognition on commodity mobile devices
Zimo Liao, Zhicheng Luo, Qianyi Huang, Linfeng Zhang, Fan Wu, Qian Zhang, Yi Wang
DOI: https://doi.org/10.1145/3447993.3483243 (published 2021-10-25)
Abstract: In-air gesture control extends the touch screen and enables contactless interaction, and has therefore become a popular research direction in the past few years. Prior work has implemented this functionality using cameras, acoustic signals, and Wi-Fi via existing hardware on commercial devices. However, these methods have low user acceptance: solutions based on cameras and acoustic signals raise privacy concerns, while Wi-Fi-based solutions are vulnerable to background noise. As a result, these methods have not been commercialized, and recent flagship smartphones implement in-air gesture recognition by adding extra on-board hardware such as mmWave radar or a depth camera. The question is: can we support in-air gesture control on legacy devices without any hardware modifications? To answer this question, we propose SMART, an in-air gesture recognition system leveraging the screen and the ambient light sensor (ALS), which are ordinary modalities on mobile devices. On the transmitter side, we design a screen display mechanism that embeds spatial information while preserving the viewing experience; on the receiver side, we develop a framework to recognize gestures from low-quality ALS readings. We implement and evaluate SMART on both a tablet and several smartphones. Results show that SMART can recognize 9 types of frequently used in-air gestures with an average accuracy of 96.1%.
Citations: 6
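Note: the receiver-side recognition framework is only summarized in the abstract. As a rough illustration of one way such a receiver could be structured, the sketch below windows a stream of ambient-light readings, normalizes each window, and classifies it with a small scikit-learn model; the window length, sampling rate, features, and classifier choice are assumptions and not SMART's actual framework.

```python
# Hypothetical receiver-side sketch: classify gestures from a window of
# ambient light sensor (ALS) readings. Window size, sampling rate, features,
# and the classifier are illustrative assumptions, not SMART's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 100  # e.g., ~1 s of ALS samples at an assumed ~100 Hz


def featurize(lux_window: np.ndarray) -> np.ndarray:
    """Normalize a lux window and append simple summary statistics."""
    w = (lux_window - lux_window.mean()) / (lux_window.std() + 1e-6)
    return np.concatenate([w, [lux_window.min(), lux_window.max(), np.ptp(lux_window)]])


# Toy synthetic data standing in for labeled gesture recordings.
rng = np.random.default_rng(0)
X = np.stack([featurize(rng.normal(300, 20 + 5 * (i % 9), WINDOW)) for i in range(180)])
y = np.array([i % 9 for i in range(180)])  # 9 gesture classes, as in the paper

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(featurize(rng.normal(300, 25, WINDOW))[None, :]))  # predicted class
```
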
Octopus: a practical and versatile wideband MIMO sensing platform
Zhe Chen, Tianyue Zheng, Jun Luo
DOI: https://doi.org/10.1145/3447993.3483267 (published 2021-10-25)
Abstract: Radio frequency (RF) technologies have achieved great success in data communication. In recent years, pervasive RF signals have further been exploited for sensing, and RF sensing has attracted attention from both academia and industry. Existing developments mainly employ commodity Wi-Fi hardware or rely on sophisticated SDR platforms. While promising in many aspects, a gap remains between lab prototypes and real-life deployments. On one hand, due to its narrow bandwidth and communication-oriented design, Wi-Fi sensing offers coarse sensing granularity and its performance is unstable in harsh real-world environments. On the other hand, SDR-based designs are hardly adoptable in practice due to their large size and high cost. To this end, we propose, design, and implement Octopus, a compact and flexible wideband MIMO sensing platform built with commercial-grade low-power impulse radio. Octopus provides a standalone and fully programmable RF sensing solution; it allows for quick algorithm design and application development, and it specifically leverages the wideband radio to achieve competent and robust performance in practice. We evaluate the performance of Octopus via micro-benchmarks, and further demonstrate its applicability using representative RF sensing applications, including passive localization, vibration sensing, and human/object imaging.
Citations: 21
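Note: the sensing applications are named but not detailed in the abstract. The sketch below shows a generic wideband-sensing primitive that impulse-radio hardware of this kind enables (not Octopus's API): stack channel impulse response (CIR) frames over slow time, remove the static background per range bin, and report the range bin with the strongest residual energy. The frame dimensions, range resolution, and threshold are assumptions.

```python
# Generic impulse-radio sensing primitive (not Octopus's API): detect motion
# and its approximate range from a stack of channel impulse response frames.
# Frame dimensions, range resolution, and the threshold are assumptions.
import numpy as np

RANGE_RES_M = 0.05  # assumed range-bin spacing in meters


def detect_motion(cir_frames: np.ndarray, threshold: float = 3.0):
    """cir_frames: (slow_time, range_bins) complex CIR samples."""
    mag = np.abs(cir_frames)
    # Background subtraction: remove the static (mean) response per range bin.
    residual = mag - mag.mean(axis=0, keepdims=True)
    energy = (residual ** 2).mean(axis=0)          # motion energy per range bin
    score = energy / (np.median(energy) + 1e-12)   # normalize against the noise floor
    bin_idx = int(np.argmax(score))
    if score[bin_idx] < threshold:
        return None                                # no significant motion
    return bin_idx * RANGE_RES_M                   # approximate target range in meters


# Synthetic example: a target oscillating in range bin 40.
rng = np.random.default_rng(1)
frames = rng.normal(0, 0.1, (200, 128)) + 1j * rng.normal(0, 0.1, (200, 128))
frames[:, 40] += 1.0 + 0.5 * np.sin(np.linspace(0, 20 * np.pi, 200))
print(detect_motion(frames))  # ~2.0 (meters)
```
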
Wearable, untethered hands tracking with passive magnets
Dongyao Chen, Mingke Wang, Chenxi He, Qing Luo, Yasha Iravantchi, Alanson P. Sample, K. Shin, Xinbing Wang
DOI: https://doi.org/10.1145/3447993.3511175 (published 2021-10-25)
Abstract: Accurate tracking of the hands and fingers allows users to employ natural gestures in various interactive applications, e.g., controller-free interaction in augmented reality. Hand tracking also supports health applications, such as monitoring face-touching, a common vector for infectious disease. However, for both types of applications, the utility of hand tracking is often limited by the impracticality of bulky tethered systems (e.g., instrumented gloves) or by inherent limitations (e.g., line-of-sight requirements or privacy concerns with vision-based systems). These limitations have severely restricted the adoption of hand tracking in real-world applications. We demonstrate MagX, a fully untethered on-body hand tracking system built from passive magnets and a novel magnetic sensing platform. Since passive magnets require no maintenance, they can be worn on the hands indefinitely, and only the sensor board needs recharging, akin to a smartwatch.
Citations: 4
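Note: MagX's tracking algorithm is not described in the abstract. A textbook building block for passive-magnet tracking in general is fitting a magnetic point-dipole model to readings from a magnetometer array, sketched below; the sensor layout, single-magnet assumption, and initial guess are illustrative and not MagX's actual method.

```python
# Generic dipole-model fit for passive-magnet localization (not MagX's code).
# Sensor layout, initial guess, and single-magnet assumption are illustrative.
import numpy as np
from scipy.optimize import least_squares

MU0_OVER_4PI = 1e-7  # T*m/A


def dipole_field(sensor_pos, magnet_pos, moment):
    """Magnetic flux density of a point dipole at each sensor position."""
    r = sensor_pos - magnet_pos                     # (N, 3) sensor-to-magnet vectors
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / dist
    m_dot_r = (r_hat @ moment)[:, None]
    return MU0_OVER_4PI * (3 * r_hat * m_dot_r - moment) / dist**3


def residuals(params, sensor_pos, measured):
    pos, moment = params[:3], params[3:]
    return (dipole_field(sensor_pos, pos, moment) - measured).ravel()


# Hypothetical 4-sensor array layout (meters) and simulated measurements.
sensors = np.array([[0, 0, 0], [0.05, 0, 0], [0, 0.05, 0], [0.05, 0.05, 0]])
true_pos, true_m = np.array([0.02, 0.03, 0.10]), np.array([0.0, 0.0, 0.5])
meas = dipole_field(sensors, true_pos, true_m)

fit = least_squares(residuals, x0=np.r_[0.0, 0.0, 0.05, 0.0, 0.0, 0.3],
                    args=(sensors, meas))
print(fit.x[:3])  # estimated magnet position, close to true_pos
```
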
Video-based social distancing evaluation in the COSMOS testbed pilot site
Mahshid Ghasemi, Zhengye Yang, Mingfei Sun, Hongzhe Ye, Zihao Xiong, Javad Ghaderi, Z. Kostić, G. Zussman
DOI: https://doi.org/10.1145/3447993.3510590 (published 2021-10-25)
Abstract: Social distancing can reduce infection rates in respiratory pandemics such as COVID-19, especially in dense urban areas. Hence, we used the PAWR COSMOS wireless edge-cloud testbed in New York City to design and evaluate two approaches for social distancing analysis. The first, the Automated video-based Social Distancing Analyzer (Auto-SDA), measures pedestrians' compliance with social distancing protocols using street-level cameras. Since street-level cameras can raise privacy concerns, we also developed the Bird's-eye-view Social Distancing Analyzer (B-SDA), which uses bird's-eye-view cameras and thereby preserves pedestrians' privacy. Both Auto-SDA and B-SDA consist of multiple modules. This demonstration illustrates the roles of these modules and their overall performance in evaluating pedestrians' compliance with social distancing protocols. Moreover, we demonstrate Auto-SDA and B-SDA on videos recorded from cameras deployed on the 2nd and 12th floors of Columbia's Mudd building, respectively.
Citations: 11
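Note: the internals of Auto-SDA and B-SDA are not given here. A minimal sketch of the common final step in video-based distance analysis, projecting per-pedestrian image detections onto the ground plane with a calibrated homography and flagging pairs closer than a threshold, is shown below; the homography values, threshold, and detections are assumptions rather than Auto-SDA/B-SDA code.

```python
# Illustrative sketch of the distance-checking step in video-based social
# distancing analysis. The homography, threshold, and detections are assumed.
from itertools import combinations
import numpy as np

# Assumed 3x3 image-to-ground-plane homography (meters), e.g., from calibration.
H = np.array([[0.02, 0.0, -5.0],
              [0.0, 0.05, -8.0],
              [0.0, 0.001, 1.0]])


def to_ground(points_px: np.ndarray) -> np.ndarray:
    """Project (N, 2) pixel coordinates (e.g., feet points) to ground-plane meters."""
    homog = np.hstack([points_px, np.ones((len(points_px), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]


def close_pairs(points_px: np.ndarray, min_dist_m: float = 2.0):
    """Return index pairs of pedestrians violating the distance threshold."""
    ground = to_ground(points_px)
    return [(i, j) for i, j in combinations(range(len(ground)), 2)
            if np.linalg.norm(ground[i] - ground[j]) < min_dist_m]


feet = np.array([[320, 400], [350, 410], [900, 600]], dtype=float)  # detector output
print(close_pairs(feet))  # [(0, 1)]: the first two pedestrians are too close
```
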
SMART: screen-based gesture recognition on commodity mobile devices
Zimo Liao, Zhicheng Luo, Qianyi Huang, Linfeng Zhang, Fan Wu, Qian Zhang, Yi Wang, Guihai Chen
DOI: https://doi.org/10.1145/3447993.3511174 (published 2021-10-25)
Abstract: In-air gesture control extends the touch screen and enables contactless interaction, and has therefore become a popular research direction in the past few years. Prior work has implemented this functionality using cameras, acoustic signals, and Wi-Fi via existing hardware on commercial devices. However, these methods have low user acceptance: solutions based on cameras and acoustic signals raise privacy concerns, while Wi-Fi-based solutions are vulnerable to background noise. As a result, these methods have not been commercialized, and recent flagship smartphones implement in-air gesture recognition by adding extra on-board hardware such as mmWave radar or a depth camera. The question is: can we support in-air gesture control on legacy devices without any hardware modifications? In this demo, we design and implement SMART, an in-air gesture recognition system leveraging the screen and the ambient light sensor (ALS), which are ordinary modalities on mobile devices. We implement SMART on a tablet. Results show that SMART can recognize 9 types of frequently used in-air gestures with an average accuracy of 96.1%.
Citations: 0
Sonica
Boyan Ding, Jinghao Zhao, Zhaowei Tan, Songwu Lu
DOI: https://doi.org/10.1145/3447993.3510589 (published 2021-10-25)
Abstract: In this demo, we describe Sonica, an open-source NB-IoT prototype platform. Both the radio access and core network components are designed and implemented with the features and characteristics of NB-IoT taken into account. With its eNB and core network (EPC) components, Sonica can function as an NB-IoT testbed that interacts with commercial off-the-shelf NB-IoT devices. Moreover, Sonica provides a flexible framework that supports quick prototyping of the MAC/PHY layers.
Citations: 2
HAWK-i: a remote and lightweight thermal imaging-based crowd screening framework
Linjie Gu, Zhe Yang, M. Mukherjee, Zhigeng Pan, Mian Guo, Xiushan Liu, Rakesh Matam, Jaime Lloret
DOI: https://doi.org/10.1145/3447993.3520260 (published 2021-10-25)
Abstract: In this demonstration, we present an end-to-end assistive human body temperature screening system, from collecting raw data with a thermal camera to identifying suspected individuals, for combating communicable infectious diseases. We deploy a lightweight MobileNet v2 model on a resource-constrained Raspberry Pi 4B to detect a person's head and body in the thermal image, and use a classifier to determine the temperature from the raw temperature data. The experiments show that although the detection accuracy is not very high, the system shortens screening time and thus reduces exposure for the individuals being screened.
Citations: 2
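Note: the demo pairs a MobileNet v2 detector with a temperature classifier but gives no implementation detail. The sketch below shows only the screening step under stated assumptions: a hypothetical detect_heads() stands in for the on-device detector, the thermal frame is assumed to be a radiometric array in degrees Celsius, and the 37.5 °C threshold is illustrative.

```python
# Hypothetical screening step for a thermal-camera pipeline. detect_heads() is
# a stand-in for the MobileNet v2 detector; the frame is assumed to be a
# radiometric array in degrees Celsius and the threshold is illustrative.
import numpy as np

FEVER_THRESHOLD_C = 37.5  # assumed screening threshold


def detect_heads(frame_c: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Placeholder for the on-device detector: returns (x0, y0, x1, y1) boxes."""
    return [(10, 10, 30, 30)]  # fixed box for demonstration only


def screen_frame(frame_c: np.ndarray) -> list[tuple[tuple[int, int, int, int], float, bool]]:
    """Return (box, head temperature, flagged) for every detected head."""
    results = []
    for (x0, y0, x1, y1) in detect_heads(frame_c):
        roi = frame_c[y0:y1, x0:x1]
        # Use a high percentile instead of the max to reject hot-pixel noise.
        temp = float(np.percentile(roi, 95))
        results.append(((x0, y0, x1, y1), temp, temp >= FEVER_THRESHOLD_C))
    return results


frame = np.full((120, 160), 32.0)      # synthetic thermal frame (deg C)
frame[12:28, 12:28] = 38.2             # a warm face region inside the box
print(screen_frame(frame))             # [((10, 10, 30, 30), 38.2, True)]
```
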