Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Articles

SeRaNDiP: Leveraging Inherent Sensor Random Noise for Differential Privacy Preservation in Wearable Community Sensing Applications
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2023-01-01 DOI: 10.1145/3596252
Ayanga Imesha Kumari Kalupahana, A. N. Balaji, X. Xiao, L. Peh
{"title":"SeRaNDiP: Leveraging Inherent Sensor Random Noise for Differential Privacy Preservation in Wearable Community Sensing Applications","authors":"Ayanga Imesha Kumari Kalupahana, A. N. Balaji, X. Xiao, L. Peh","doi":"10.1145/3596252","DOIUrl":"https://doi.org/10.1145/3596252","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"81 1","pages":"61:1-61:38"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75906840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TwinkleTwinkle: Interacting with Your Smart Devices by Eye Blink
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2023-01-01 DOI: 10.1145/3596238
Haiming Cheng, W. Lou, Yanni Yang, Yi-pu Chen, Xinyu Zhang
{"title":"TwinkleTwinkle: Interacting with Your Smart Devices by Eye Blink","authors":"Haiming Cheng, W. Lou, Yanni Yang, Yi-pu Chen, Xinyu Zhang","doi":"10.1145/3596238","DOIUrl":"https://doi.org/10.1145/3596238","url":null,"abstract":"Recent years have witnessed the rapid boom of mobile devices interweaving with changes the epidemic has made to people’s lives. Though a tremendous amount of novel human-device interaction techniques have been put forward to facilitate various audiences and scenarios, limitations and inconveniences still occur to people having difficulty speaking or using their fingers/hands/arms or wearing masks/glasses/gloves. To fill the gap of such interaction contexts beyond using hands, voice, face, or mouth, in this work, we take the first step to propose a novel Human-Computer Interaction (HCI) system, TwinkleTwinkle , which senses and recognizes eye blink patterns in a contact-free and training-free manner leveraging ultrasound signals on commercial devices. TwinkleTwinkle first applies a phase difference based approach to depicting candidate eye blink motion profiles without removing any noises, followed by modeling intrinsic characteristics of blink motions through adaptive constraints to separate tiny patterns from interferences in conditions where blink habits and involuntary movements vary between individuals. We propose a vote-based approach to get final patterns designed to map with number combinations either self-defined or based on carriers like ASCII code and Morse code to make interaction seamlessly embedded with normal and well-known language systems. We implement TwinkleTwinkle on smartphones with all methods realized in the time domain and conduct extensive evaluations in various settings. Results show that TwinkleTwinkle achieves about 91% accuracy in recognizing 23 blink patterns among different people.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"23 1","pages":"50:1-50:30"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81728422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VoiceCloak: Adversarial Example Enabled Voice De-Identification with Balanced Privacy and Utility
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2023-01-01 DOI: 10.1145/3596266
Meng Chen, Liwang Lu, Junhao Wang, Jiadi Yu, Ying Chen, Zhibo Wang, Zhongjie Ba, Feng Lin, Kui Ren
{"title":"VoiceCloak: Adversarial Example Enabled Voice De-Identification with Balanced Privacy and Utility","authors":"Meng Chen, Liwang Lu, Junhao Wang, Jiadi Yu, Ying Chen, Zhibo Wang, Zhongjie Ba, Feng Lin, Kui Ren","doi":"10.1145/3596266","DOIUrl":"https://doi.org/10.1145/3596266","url":null,"abstract":"Faced with the threat of identity leakage during voice data publishing, users are engaged in a privacy-utility dilemma when enjoying the utility of voice services. Existing machine-centric studies employ direct modification or text-based re-synthesis to de-identify users’ voices but cause inconsistent audibility for human participants in emerging online communication scenarios, such as virtual meetings. In this paper, we propose a human-centric voice de-identification system, VoiceCloak , which uses adversarial examples to balance the privacy and utility of voice services. Instead of typical additive examples inducing perceivable distortions, we design a novel convolutional adversarial example that modulates perturbations into real-world room impulse responses. Benefiting from this, VoiceCloak could preserve user identity from exposure by Automatic Speaker Identification (ASI), while remaining the voice perceptual quality for non-intrusive de-identification. Moreover, VoiceCloak learns a compact speaker distribution through a conditional variational auto-encoder to synthesize diverse targets on demand. Guided by these pseudo targets, VoiceCloak constructs adversarial examples in an input-specific manner, enabling any-to-any identity transformation for robust de-identification. Experimental results show that VoiceCloak could achieve over 92% and 84% successful de-identification on mainstream ASIs and commercial systems with excellent voiceprint consistency, speech integrity, and audio quality.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"77 1","pages":"48:1-48:21"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82624188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
UQRCom: Underwater Wireless Communication Based on QR Code
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-12-21 DOI: 10.1145/3571588
Xinyang Liu, Lei Wang, Jie Xiong, Chi Lin, Xinhua Gao, Jiale Li, Yibo Wang
{"title":"UQRCom: Underwater Wireless Communication Based on QR Code","authors":"Xinyang Liu, Lei Wang, Jie Xiong, Chi Lin, Xinhua Gao, Jiale Li, Yibo Wang","doi":"10.1145/3571588","DOIUrl":"https://doi.org/10.1145/3571588","url":null,"abstract":"While communication in the air has been a norm with the pervasiveness of WiFi and LTE infrastructure, underwater communication still faces a lot of challenges. Even nowadays, the main communication method for divers in underwater environment is hand gesture. There are multiple issues associated with gesture-based communication including limited amount of information and ambiguity. On the other hand, traditional RF-based wireless communication technologies which have achieved great success in the air can hardly work in underwater environment due to the extremely severe attenuation. In this paper, we propose UQRCom, an underwater wireless communication system designed for divers. We design a UQR code which stems from QR code and address the unique challenges in underwater environment such as color cast, contrast reduction and light interfere. With both real-world experiments and simulation, we show that the proposed system can achieve robust real-time communication in underwater environment. For UQR codes with a size of 19.8 cm x 19.8 cm, the communication distance can be 11.2 m and the achieved data rate (6.9 kbps ~ 13.6 kbps) is high enough for voice communication between divers.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"18 1","pages":"1 - 22"},"PeriodicalIF":0.0,"publicationDate":"2022-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87485709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
MotorBeat: Acoustic Communication for Home Appliances via Variable Pulse Width Modulation
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-09-30 DOI: 10.1145/3517255
Weiguo Wang, Jinming Li, Yuan He, Xiuzhen Guo, Yunhao Liu
{"title":"MotorBeat: Acoustic Communication for Home Appliances via Variable Pulse Width Modulation","authors":"Weiguo Wang, Jinming Li, Yuan He, Xiuzhen Guo, Yunhao Liu","doi":"10.1145/3517255","DOIUrl":"https://doi.org/10.1145/3517255","url":null,"abstract":"More and more home appliances are now connected to the Internet, thus enabling various smart home applications. However, a critical problem that may impede the further development of smart home is overlooked: Small appliances account for the majority of home appliances, but they receive little attention and most of them are cut off from the Internet. To fill this gap, we propose MotorBeat, an acoustic communication approach that connects small appliances to a smart speaker. Our key idea is to exploit direct current (DC) motors, which are common components of small appliances, to transmit acoustic messages. We design a novel scheme named Variable Pulse Width Modulation (V-PWM) to drive DC motors. MotorBeat achieves the following 3C goals: (1) Comfortable to hear, (2) Compatible with multiple motor modes, and (3) Concurrent transmission. We implement MotorBeat with commercial devices and evaluate its performance on three small appliances and ten DC motors. The results show that the communication range can be up to 10 m","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"49 1","pages":"31:1-31:24"},"PeriodicalIF":0.0,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80254080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
MuteIt: Jaw Motion Based Unvoiced Command Recognition Using Earable
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-09-06 DOI: 10.1145/3550281
Tanmay Srivastava, Prerna Khanna, Shijia Pan, Phuc Nguyen, S. Jain
{"title":"MuteIt: Jaw Motion Based Unvoiced Command Recognition Using Earable","authors":"Tanmay Srivastava, Prerna Khanna, Shijia Pan, Phuc Nguyen, S. Jain","doi":"10.1145/3550281","DOIUrl":"https://doi.org/10.1145/3550281","url":null,"abstract":"In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions that can be unreliable in noisy environments, disruptive to those around us, and compromise our privacy. We propose a twin-IMU set up to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and further each syllable into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by only tracking jaw motion is challenging. As a secondary articulator, jaw motion is not distinctive enough for unvoiced speech recognition. MuteIt combines IMU data with the anatomy of jaw movement as well as principles from linguistics, to model the task of word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to be easily scaled to a large set of words. We validate MuteIt for 20 subjects with diverse speech accents to recognize 100 common command words. MuteIt achieves a mean word recognition accuracy of 94.8% in noise-free conditions. When compared with common voice assistants, MuteIt outperforms them in noisy acoustic environments, achieving higher than 90% recognition accuracy. Even in the presence of motion artifacts, such as head movement, walking, and riding in a moving vehicle, MuteIt achieves mean word recognition accuracy of 91% over all scenarios.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"51 1","pages":"1 - 26"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86970635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
TransRisk: Mobility Privacy Risk Prediction based on Transferred Knowledge
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-07-04 DOI: 10.1145/3534581
Xiaoyang Xie, Zhiqing Hong, Zhou Qin, Zhihan Fang, Yuan Tian, Desheng Zhang
{"title":"TransRisk: Mobility Privacy Risk Prediction based on Transferred Knowledge","authors":"Xiaoyang Xie, Zhiqing Hong, Zhou Qin, Zhihan Fang, Yuan Tian, Desheng Zhang","doi":"10.1145/3534581","DOIUrl":"https://doi.org/10.1145/3534581","url":null,"abstract":"Human mobility data may lead to privacy concerns because a resident can be re-identified from these data by malicious attacks even with anonymized user IDs. For an urban service collecting mobility data, an efficient privacy risk assessment is essential for the privacy protection of its users. The existing methods enable efficient privacy risk assessments for service operators to fast adjust the quality of sensing data to lower privacy risk by using prediction models. However, for these prediction models, most of them require massive training data, which has to be collected and stored first. Such a large-scale long-term training data collection contradicts the purpose of privacy risk prediction for new urban services, which is to ensure that the quality of high-risk human mobility data is adjusted to low privacy risk within a short time. To solve this problem, we present a privacy risk prediction model based on transfer learning, i.e., TransRisk, to predict the privacy risk for a new target urban service through (1) small-scale short-term data of its own, and (2) the knowledge learned from data from other existing urban services. We envision the application of TransRisk on the traffic camera surveillance system and evaluate it with real-world mobility datasets already collected in a Chinese city, Shenzhen, including four source datasets, i.e., (i) one call detail record dataset (CDR) with 1.2 million users; (ii) one cellphone connection data dataset (CONN) with 1.2 million users; (iii) a vehicular GPS dataset (Vehicles) with 10 thousand vehicles; (iv) an electronic toll collection transaction dataset (ETC) with 156 thousand users, and a target dataset, i.e., a camera dataset (Camera) with 248 cameras. The results show that our model outperforms the state-of-the-art methods in terms of RMSE and MAE. Our work also provides valuable insights and implications on mobility data privacy risk assessment for both current and future large-scale services.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"10 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86751898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FocalPoint: Adaptive Direct Manipulation for Selecting Small 3D Virtual Objects
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-03-27 DOI: 10.1145/3580856
Jiaju Ma, Jing Qian, Tongyu Zhou, Jeffson Huang
{"title":"FocalPoint: Adaptive Direct Manipulation for Selecting Small 3D Virtual Objects","authors":"Jiaju Ma, Jing Qian, Tongyu Zhou, Jeffson Huang","doi":"10.1145/3580856","DOIUrl":"https://doi.org/10.1145/3580856","url":null,"abstract":"We propose FocalPoint, a direct manipulation technique in smartphone augmented reality (AR) for selecting small densely-packed objects within reach, a fundamental yet challenging task in AR due to the required accuracy and precision. FocalPoint adaptively and continuously updates a cylindrical geometry for selection disambiguation based on the user's selection history and hand movements. This design is informed by a preliminary study which revealed that participants preferred selecting objects appearing in particular regions of the screen. We evaluate FocalPoint against a baseline direct manipulation technique in a 12-participant study with two tasks: selecting a 3 mm wide target from a pile of cubes and virtually decorating a house with LEGO pieces. FocalPoint was three times as accurate for selecting the correct object and 5.5 seconds faster on average; participants using FocalPoint decorated their houses more and were more satisfied with the result. We further demonstrate the finer control enabled by FocalPoint in example applications of robot repair, 3D modeling, and neural network visualizations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"35 1","pages":"1 - 26"},"PeriodicalIF":0.0,"publicationDate":"2022-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81308451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ForceSticker: Wireless, Batteryless, Thin & Flexible Force Sensors
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-03-27 DOI: 10.1145/3580793
Agrim Gupta, D. Park, Shayaun Bashar, C. Girerd, Nagarjun Bhat, Siddhi Mundhra, Tania. K. Morimoto, Dinesh Bharadia
{"title":"ForceSticker: Wireless, Batteryless, Thin & Flexible Force Sensors","authors":"Agrim Gupta, D. Park, Shayaun Bashar, C. Girerd, Nagarjun Bhat, Siddhi Mundhra, Tania. K. Morimoto, Dinesh Bharadia","doi":"10.1145/3580793","DOIUrl":"https://doi.org/10.1145/3580793","url":null,"abstract":"Any two objects in contact with each other exert a force that could be simply due to gravity or mechanical contact, such as any ubiquitous object exerting weight on a platform or the contact between two bones at our knee joints. The most ideal way of capturing these contact forces is to have a flexible force sensor which can conform well to the contact surface. Further, the sensor should be thin enough to not affect the contact physics between the two objects. In this paper, we showcase the design of such thin, flexible sticker-like force sensors dubbed as 'ForceStickers', ushering into a new era of miniaturized force sensors. ForceSticker achieves this miniaturization by creating new class of capacitive force sensors which avoid both batteries, as well as wires. The wireless and batteryless readout is enabled via hybrid analog-digital backscatter, by piggybacking analog sensor data onto a digitally identified RFID link. Hence, ForceSticker finds natural applications in space and battery-constraint in-vivo usecases, like force-sensor backed orthopaedic implants, surgical robots. Further, ForceSticker finds applications in ubiquiti-constraint scenarios. For example, these force-stickers enable cheap, digitally readable barcodes that can provide weight information, with possible usecases in warehouse integrity checks. To meet these varied application scenarios, we showcase the general framework behind design of ForceSticker. With ForceSticker framework, we design 4mm*2mm sensor prototypes, with two different polymer layers of ecoflex and neoprene rubber, having force ranges of 0-6N and 0-40N respectively, with readout errors of 0.25, 1.6 N error each (<5% of max. force). Further, we stress test ForceSticker by >10,000 force applications without significant error degradation. We also showcase two case-studies onto the possible applications of ForceSticker: sensing forces from a toy knee-joint model and integrity checks of warehouse packaging.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"59 1","pages":"1 - 32"},"PeriodicalIF":0.0,"publicationDate":"2022-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80924679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensing
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2022-03-27 DOI: 10.1145/3580854
Mallesham Dasari
{"title":"RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensing","authors":"Mallesham Dasari","doi":"10.1145/3580854","DOIUrl":"https://doi.org/10.1145/3580854","url":null,"abstract":"The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust and high tracking accuracies across multiple agents in practical, everyday environments - a feature central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering to the multiple dimensions of accuracy, robustness (diverse environmental conditions) and scalability (multiple agents) simultaneously. In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, - passive/relative (e.g. visual odometry) and active/absolute tracking (e.g.infrastructure-assisted RF localization) offer a key first layer of diversity that brings scalability while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. ROVAR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal ROVAR'S multi-dimensional benefits in terms of tracking accuracy, scalability and robustness to enable practical multi-agent immersive applications in everyday environments.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"309 1","pages":"1 - 25"},"PeriodicalIF":0.0,"publicationDate":"2022-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79937788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0