Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Articles

Exergy: A Toolkit to Simplify Creative Applications of Wind Energy Harvesting
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580814
Jung Wook Park, Sienna Xin Sun, Tingyu Cheng, Dong Whi Yoo, Jiawei Zhou, Youngwook Do, G. Abowd, R. Arriaga
{"title":"Exergy: A Toolkit to Simplify Creative Applications of Wind Energy Harvesting","authors":"Jung Wook Park, Sienna Xin Sun, Tingyu Cheng, Dong Whi Yoo, Jiawei Zhou, Youngwook Do, G. Abowd, R. Arriaga","doi":"10.1145/3580814","DOIUrl":"https://doi.org/10.1145/3580814","url":null,"abstract":"Energy harvesting reduces the burden of power source maintenance and promises to make computing systems genuinely ubiquitous. Researchers have made inroads in this area, but their novel energy harvesting materials and fabrication techniques remain inaccessible to the general maker communities. Therefore, this paper aims to provide a toolkit that makes energy harvesting accessible to novices. In Study 1, we investigate the challenges and opportunities associated with devising energy harvesting technology with experienced researchers and makers (N=9). Using the lessons learned from this investigation, we design a wind energy harvesting toolkit, Exergy, in Study 2. It consists of a simulator, hardware tools, a software example, and ideation cards. We apply it to vehicle environments, which have yet to be explored despite their potential. In Study 3, we conduct a two-phase workshop: hands-on experience and ideation sessions. The results show that novices (N=23) could use Exergy confidently and invent self-sustainable energy harvesting applications creatively.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"20 1","pages":"25:1-25:28"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73004086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BabyNutri: A Cost-Effective Baby Food Macronutrients Analyzer Based on Spectral Reconstruction
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580858
Haiyan Hu, Qianyi Huang, Qian Zhang
{"title":"BabyNutri: A Cost-Effective Baby Food Macronutrients Analyzer Based on Spectral Reconstruction","authors":"Haiyan Hu, Qianyi Huang, Qian Zhang","doi":"10.1145/3580858","DOIUrl":"https://doi.org/10.1145/3580858","url":null,"abstract":"The physical and physiological development of infants and toddlers requires the proper amount of macronutrient intake, making it an essential problem to estimate the macronutrient in baby food. Nevertheless, existing solutions are either too expensive or poor performing, preventing the widespread use of automatic baby nutrient intake logging. To narrow this gap, this paper proposes a cost-effective and portable baby food macronutrient estimation system, BabyNutri. BabyNutri exploits a novel spectral reconstruction algorithm to reconstruct high-dimensional informative spectra from low-dimensional spectra, which are available from low-cost spectrometers. We propose a denoising autoencoder for the reconstruction process, by which BabyNutri can reconstruct a 160-dimensional spectrum from a 5-dimensional spectrum. Since the high-dimensional spectrum is rich in light absorption features of macronutrients, it can achieve more accurate macronutrient estimation. In addition, considering that baby food contains complex ingredients, we also design a CNN nutrition estimation model with good generalization performance over various types of baby food. Our extensive experiments over 88 types of baby food show that the spectral reconstruction error of BabyNutri is only 5 . 91%, reducing 33% than the state-of-the-art baseline with the same time complexity. In addition, the nutrient estimation performance of BabyNutri not only obviously outperforms state-of-the-art and cost-effective solutions but also is highly correlated with the professional spectrometer, with the correlation coefficients of 0 . 81, 0 . 88, 0 . 82 for protein, fat, and carbohydrate, respectively. However the price of our system is only one percent of the commercial solution. We also validate that BabyNutri is robust regarding various factors, e . g ., ambient light, food volume, and even unseen baby food samples.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"7 1","pages":"15:1-15:30"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79717441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
WiMeasure: Millimeter-level Object Size Measurement with Commodity WiFi Devices
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3596250
Xuanzhi Wang, Kai Niu, Anlan Yu, Jie Xiong, Zhiyu Yao, Junzhe Wang, Wenwei Li
{"title":"WiMeasure: Millimeter-level Object Size Measurement with Commodity WiFi Devices","authors":"Xuanzhi Wang, Kai Niu, Anlan Yu, Jie Xiong, Zhiyu Yao, Junzhe Wang, Wenwei Li","doi":"10.1145/3596250","DOIUrl":"https://doi.org/10.1145/3596250","url":null,"abstract":"In the past few years, a large range of wireless signals such as WiFi, RFID, UWB and Millimeter Wave were utilized for sensing purposes. Among these wireless sensing modalities, WiFi sensing attracts a lot of attention owing to the pervasiveness of WiFi infrastructure in our surrounding environments. While WiFi sensing has achieved a great success in capturing the target’s motion information ranging from coarse-grained activities and gestures to fine-grained vital signs, it still has difficulties in precisely obtaining the target size owing to the low frequency and small bandwidth of WiFi signals. Even Millimeter Wave radar can only achieve a very coarse-grained size measurement. High precision object size sensing requires using RF signals in the extremely high-frequency band (e.g., Terahertz band). In this paper, we utilize low-frequency WiFi signals to achieve accurate object size measurement without requiring any learning or training. The key insight is that when an object moves between a pair of WiFi transceivers, the WiFi CSI variations contain singular points (i.e., singularities) and we observe an exciting opportunity of employing the number of singularities to measure the object size. In this work, we model the relationship between the object size and the number of singularities when an object moves near the LoS path, which lays the theoretical foundation for the proposed system to work. By addressing multiple challenges, for the first time, we make WiFi-based object size measurement work on commodity WiFi cards and achieve a surprisingly low median error of 2.6 mm. We believe this work is an important missing piece of WiFi sensing and opens the door to size measurement using low-cost low-frequency RF signals.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"13 1","pages":"79:1-79:26"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78362382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
LemurDx: Using Unconstrained Passive Sensing for an Objective Measurement of Hyperactivity in Children with no Parent Input
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3596244
Riku Arakawa, Karan Ahuja, K. Mak, Gwendolyn Thompson, Samy Shaaban, Oliver Lindhiem, Mayank Goel
{"title":"LemurDx: Using Unconstrained Passive Sensing for an Objective Measurement of Hyperactivity in Children with no Parent Input","authors":"Riku Arakawa, Karan Ahuja, K. Mak, Gwendolyn Thompson, Samy Shaaban, Oliver Lindhiem, Mayank Goel","doi":"10.1145/3596244","DOIUrl":"https://doi.org/10.1145/3596244","url":null,"abstract":"Hyperactivity is the most dominant presentation of Attention-Deficit/Hyperactivity Disorder in young children. Currently, measuring hyperactivity involves parents’ or teachers’ reports. These reports are vulnerable to subjectivity and can lead to misdiagnosis. LemurDx provides an objective measure of hyperactivity using passive mobile sensing. We collected data from 61 children (25 with hyperactivity) who wore a smartwatch for up to 7 days without changing their daily routine. The participants’ parents maintained a log of the child’s activities at a half-hour granularity ( e.g. , sitting, exercising) as contextual information. Our ML models achieved 85.2% accuracy in detecting hyperactivity in children (using parent-provided activity labels). We also built models that estimated children’s context from the sensor data and did not rely on activity labels to reduce parent burden. These models achieved 82.0% accuracy in detecting hyperactivity. In addition, we interviewed five clinicians who suggested a need for a tractable risk score that enables analysis of a child’s behavior across contexts. Our results show the feasibility of supporting the diagnosis of hyperactivity by providing clinicians with an interpretable and objective score of hyperactivity using off-the-shelf watches and adding no constraints to children or their guardians.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"41 1","pages":"46:1-46:23"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73839508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580872
Kaikai Deng, Dong Zhao, Qiaoyue Han, Zihan Zhang, Shuyue Wang, Anfu Zhou, Huadong Ma
{"title":"Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks","authors":"Kaikai Deng, Dong Zhao, Qiaoyue Han, Zihan Zhang, Shuyue Wang, Anfu Zhou, Huadong Ma","doi":"10.1145/3580872","DOIUrl":"https://doi.org/10.1145/3580872","url":null,"abstract":"Millimeter wave radar is a promising sensing modality for enabling pervasive and privacy-preserving human sensing. However, the lack of large-scale radar datasets limits the potential of training deep learning models to achieve generalization and robustness. To close this gap, we resort to designing a software pipeline that leverages wealthy video repositories to generate synthetic radar data, but it confronts key challenges including i) multipath reflection and attenuation of radar signals among multiple humans, ii) unconvertible generated data leading to poor generality for various applications, and iii) the class-imbalance issue of videos leading to low model stability. To this end, we design Midas to generate realistic, convertible radar data from videos via two components: (i) a data generation network ( DG-Net ) combines several key modules, depth prediction , human mesh fitting and multi-human reflection model , to simulate the multipath reflection and attenuation of radar signals to output convertible coarse radar data, followed by a Transformer model to generate realistic radar data; (ii) a variant Siamese network ( VS-Net ) selects key video clips to eliminate data redundancy for addressing the class-imbalance issue. We implement and evaluate Midas with video data from various external data sources and real-world radar data, demonstrating its great advantages over the state-of-the-art approach for both activity recognition and object detection tasks. CCS Concepts: • Human-centered computing → Human computer interaction (HCI) ; • Computer systems organiza-tion → Architectures .","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"32 1","pages":"9:1-9:26"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82080506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
eat2pic: An Eating-Painting Interactive System to Nudge Users into Making Healthier Diet Choices
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580784
Yugo Nakamura, Rei Nakaoka, Yuki Matsuda, K. Yasumoto
{"title":"eat2pic: An Eating-Painting Interactive System to Nudge Users into Making Healthier Diet Choices","authors":"Yugo Nakamura, Rei Nakaoka, Yuki Matsuda, K. Yasumoto","doi":"10.1145/3580784","DOIUrl":"https://doi.org/10.1145/3580784","url":null,"abstract":"Fig. 1. By transforming eating into a task of progressively coloring a landscape projected onto a screen, the eat2pic system encourages users to eat more slowly and maintain a healthy balanced diet. The eat2pic system is composed of a calm sensing component based on a sensor-equipped chopstick (A) and visual feedback components using two types of digital canvases (C, E). The colors of the foods consumed by the user are shown on one part of a landscape displayed on two digital canvases to illustrate a single meal and the food consumed in a week as digital paintings generated by an automated system. The one-meal eat2pic (B, C) guides a user’s behavior through a single meal with real-time feedback, whereas the one-week eat2pic (D, E) guides a user’s food choices and eating behaviors with longer-term feedback accumulated over a full week.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"38 1","pages":"24:1-24:23"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84543318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EarAcE: Empowering Versatile Acoustic Sensing via Earable Active Noise Cancellation Platform
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3596242
Yetong Cao, Chao Cai, A. Yu, Fan Li, Jun Luo
{"title":"EarAcE: Empowering Versatile Acoustic Sensing via Earable Active Noise Cancellation Platform","authors":"Yetong Cao, Chao Cai, A. Yu, Fan Li, Jun Luo","doi":"10.1145/3596242","DOIUrl":"https://doi.org/10.1145/3596242","url":null,"abstract":"In recent years, particular attention has been devoted to earable acoustic sensing due to its numerous applications. However, the lack of a common platform for accessing raw audio samples has forced researchers/developers to pay great efforts to the trifles of prototyping often irrelevant to the core sensing functions. Meanwhile, the growing popularity of active noise cancellation (ANC) has endowed common earphones with high standard acoustic capability yet to be explored by sensing. To this end, we propose EarA ce to be the first acoustic sensing platform exploiting the native acoustics of commercial ANC earphones, significantly improving upon self-crafted earphone sensing devices. EarA ce takes a compact design to handle hardware heterogeneity and to deliver flexible control on audio facilities. Leveraging a systematic study on in-ear acoustic signals, EarA ce gains abilities to combat performance sensitivity to device wearing states and to eliminate body motion interference. We further implement three major acoustic sensing applications to showcase the efficacy and adaptability of EarA ce ; the results evidently demonstrate EarA ce ’s promising future in facilitating earable acoustic sensing research.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"88 1","pages":"47:1-47:23"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81131025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ThermoFit: Thermoforming Smart Orthoses via Metamaterial Structures for Body-Fitting and Component-Adjusting
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580806
Guanyun Wang, Yue Yang, Mengyan Guo, Kuang-ji Zhu, Zihan Yan, Qiang Cui, Zihong Zhou, Junzhe Ji, Jiaji Li, Danli Luo, Deying Pan, Yitao Fan, Teng Han, Ye Tao, Lingyun Sun
{"title":"ThermoFit: Thermoforming Smart Orthoses via Metamaterial Structures for Body-Fitting and Component-Adjusting","authors":"Guanyun Wang, Yue Yang, Mengyan Guo, Kuang-ji Zhu, Zihan Yan, Qiang Cui, Zihong Zhou, Junzhe Ji, Jiaji Li, Danli Luo, Deying Pan, Yitao Fan, Teng Han, Ye Tao, Lingyun Sun","doi":"10.1145/3580806","DOIUrl":"https://doi.org/10.1145/3580806","url":null,"abstract":"Smart orthoses hold great potential for intelligent rehabilitation monitoring and training. However, most of these electronic assistive devices are typically too difficult for daily use and challenging to modify to accommodate variations in body shape and medical needs. For existing clinicians, the customization pipeline of these smart devices imposes significant learning costs. This paper introduces ThermoFit, an end-to-end design and fabrication pipeline for thermoforming smart orthoses that adheres to the clinically accepted procedure. ThermoFit enables the shapes and electronics positions of smart orthoses to conform to bodies and allows rapid iteration by integrating low-cost Low-Temperature Thermoplastics (LTTPs) with custom metamaterial structures and electronic components. Specifically, three types of metamaterial structures are used in LTTPs to reduce the wrinkles caused by the thermoforming process and to permit component position adjustment and joint movement. A design tool prototype aids in generating metamaterial patterns and optimizing component placement and circuit routing. Three applications show that ThermoFit can be shaped on bodies to different wearables. Finally, a hands-on study with a clinician verifies the user-friendliness of thermoforming smart orthosis, and technical evaluations demonstrate fabrication efficiency and electronic continuity.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"66 1","pages":"31:1-31:27"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83918981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
FeverPhone: Accessible Core-Body Temperature Sensing for Fever Monitoring Using Commodity Smartphones
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580850
Joseph Breda, Mastafa Springston, A. Mariakakis, Shwetak N. Patel
{"title":"FeverPhone: Accessible Core-Body Temperature Sensing for Fever Monitoring Using Commodity Smartphones","authors":"Joseph Breda, Mastafa Springston, A. Mariakakis, Shwetak N. Patel","doi":"10.1145/3580850","DOIUrl":"https://doi.org/10.1145/3580850","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"457 ","pages":"3:1-3:23"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91550863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VoiceListener: A Training-free and Universal Eavesdropping Attack on Built-in Speakers of Mobile Devices
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date : 2023-01-01 DOI: 10.1145/3580789
Lei Wang, Meng Chen, Lu Li, Feng Lin, Kui Ren, Liwang Lu, Zhongjie Ba
{"title":"VoiceListener: A Training-free and Universal Eavesdropping Attack on Built-in Speakers of Mobile Devices","authors":"Lei Wang, Meng Chen, Lu Li, Feng Lin, Kui Ren, Lei Wang, Meng Chen, Liwang Lu, Zhongjie Ba, Feng Lin","doi":"10.1145/3580789","DOIUrl":"https://doi.org/10.1145/3580789","url":null,"abstract":"Recently, voice leakage gradually raises more significant concerns of users, due to its underlying sensitive and private information when providing intelligent services. Existing studies demonstrate the feasibility of applying learning-based solutions on built-in sensor measurements to recover voices. However, due to the privacy concerns, large-scale voices-sensor measurements samples for model training are not publicly available, leading to significant efforts in data collection for such an attack. In this paper, we propose a training-free and universal eavesdropping attack on built-in speakers, VoiceListener , which releases the data collection efforts and is able to adapt to various voices, platforms, and domains. In particular, VoiceListener develops an aliasing-corrected super resolution mechanism, including an aliasing-based pitch estimation and an aliasing-corrected voice recovering, to convert the undersampled narrow-band sensor measurements to wide-band voices. Extensive experiments demonstrate that our proposed VoiceListener could accurately recover the voices from undersampled sensor measurements and is robust to different voices, platforms and domains, realizing the universal eavesdropping attack.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"11 1","pages":"32:1-32:22"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78965604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1