Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Articles

HyperHAR: Inter-sensing Device Bilateral Correlations and Hyper-correlations Learning Approach for Wearable Sensing Device Based Human Activity Recognition
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643511
Nafees Ahmad, Ho-fung Leung
{"title":"HyperHAR: Inter-sensing Device Bilateral Correlations and Hyper-correlations Learning Approach for Wearable Sensing Device Based Human Activity Recognition","authors":"Nafees Ahmad, Ho-fung Leung","doi":"10.1145/3643511","DOIUrl":"https://doi.org/10.1145/3643511","url":null,"abstract":"Human activity recognition (HAR) has emerged as a prominent research field in recent years. Current HAR models are only able to model bilateral correlations between two sensing devices for feature extraction. However, for some activities, exploiting correlations among more than two sensing devices, which we call hyper-correlations in this paper, is essential for extracting discriminatory features. In this work, we propose a novel HyperHAR framework that automatically models both bilateral and hyper-correlations among sensing devices. The HyperHAR consists of three modules. The Intra-sensing Device Feature Extraction Module generates latent representation across the data of each sensing device, based on which the Inter-sensing Device Multi-order Correlations Learning Module simultaneously learns both bilateral correlations and hyper-correlations. Lastly, the Information Aggregation Module generates a representation for an individual sensing device by aggregating the bilateral correlations and hyper-correlations it involves in. It also generates the representation for a pair of sensing devices by aggregating the hyper-correlations between the pair and other different individual sensing devices. We also propose a computationally more efficient HyperHAR-Lite framework, a lightweight variant of the HyperHAR framework, at a small cost of accuracy. Both the HyperHAR and HyperHAR-Lite outperform SOTA models across three commonly used benchmark datasets with significant margins. We validate the efficiency and effectiveness of the proposed frameworks through an ablation study and quantitative and qualitative analysis.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"39 11","pages":"1:1-1:29"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140261817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
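To make the multi-order correlation idea above concrete, here is a minimal sketch (an illustrative assumption, not the authors' implementation): per-device feature vectors are combined pairwise for bilateral correlations and triple-wise for hyper-correlations before aggregation. The module names, projection layers, and dimensions are invented.

```python
# Toy stand-in for multi-order (bilateral + hyper) correlation learning.
import itertools
import torch
import torch.nn as nn

class ToyMultiOrderCorrelation(nn.Module):
    """Assumes each of the D sensing devices has already been encoded
    into a d-dimensional feature vector (the intra-device module)."""
    def __init__(self, dim: int):
        super().__init__()
        self.pair_proj = nn.Linear(2 * dim, dim)    # bilateral correlation
        self.triple_proj = nn.Linear(3 * dim, dim)  # hyper-correlation (order 3)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_devices, dim)
        _, d, _ = feats.shape
        pair_feats, triple_feats = [], []
        for i, j in itertools.combinations(range(d), 2):
            pair_feats.append(self.pair_proj(torch.cat([feats[:, i], feats[:, j]], dim=-1)))
        for i, j, k in itertools.combinations(range(d), 3):
            triple_feats.append(self.triple_proj(torch.cat([feats[:, i], feats[:, j], feats[:, k]], dim=-1)))
        # Aggregate all correlation orders into one device-set representation.
        all_feats = torch.stack(pair_feats + triple_feats, dim=1)
        return all_feats.mean(dim=1)

# Usage: 4 wearable devices, 64-d features each.
x = torch.randn(8, 4, 64)
print(ToyMultiOrderCorrelation(64)(x).shape)  # torch.Size([8, 64])
```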
Privacy-Preserving and Cross-Domain Human Sensing by Federated Domain Adaptation with Semantic Knowledge Correction
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643503
Kaijie Gong, Yi Gao, Wei Dong
{"title":"Privacy-Preserving and Cross-Domain Human Sensing by Federated Domain Adaptation with Semantic Knowledge Correction","authors":"Kaijie Gong, Yi Gao, Wei Dong","doi":"10.1145/3643503","DOIUrl":"https://doi.org/10.1145/3643503","url":null,"abstract":"Federated Learning (FL) enables distributed training of human sensing models in a privacy-preserving manner. While promising, federated global models suffer from cross-domain accuracy degradation when the labeled source domains statistically differ from the unlabeled target domain. To tackle this problem, recent methods perform pairwise computation on the source and target domains to minimize the domain discrepancy by adversarial strategy. However, these methods are limited by the fact that pairwise source-target adversarial alignment alone only achieves domain-level alignment, which entails the alignment of domain-invariant as well as environment-dependent features. The misalignment of environment-dependent features may cause negative impact on the performance of the federated global model. In this paper, we introduce FDAS, a Federated adversarial Domain Adaptation with Semantic Knowledge Correction method. FDAS achieves concurrent alignment at both domain and semantic levels to improve the semantic quality of the aligned features, thereby reducing the misalignment of environment-dependent features. Moreover, we design a cross-domain semantic similarity metric and further devise feature selection and feature refinement mechanisms to enhance the two-level alignment. In addition, we propose a similarity-aware model fine-tuning strategy to further improve the target model performance. We evaluate the performance of FDAS extensively on four public and a real-world human sensing datasets. Extensive experiments demonstrate the superior effectiveness of FDAS and its potential in the real-world ubiquitous computing scenarios.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"31 2","pages":"6:1-6:26"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
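The abstract builds on pairwise source-target adversarial alignment. The sketch below shows the standard gradient-reversal pattern for such domain-level alignment, as a generic illustration only; it is not the FDAS semantic-correction method, and the network sizes and batch shapes are assumptions.

```python
# Generic domain-adversarial alignment: a gradient-reversal layer plus a
# domain discriminator pushes the feature extractor toward domain-invariance.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
domain_discriminator = nn.Linear(64, 1)

source = torch.randn(32, 128)  # labelled source-domain sensing features
target = torch.randn(32, 128)  # unlabelled target-domain sensing features
feats = feature_extractor(torch.cat([source, target]))
domain_logits = domain_discriminator(grad_reverse(feats))
domain_labels = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])
adv_loss = nn.functional.binary_cross_entropy_with_logits(domain_logits, domain_labels)
adv_loss.backward()  # reversed gradients flow into the feature extractor
```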
UFace: Your Smartphone Can "Hear" Your Facial Expression!
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643546
Shuning Wang, Linghui Zhong, Yongjian Fu, Lili Chen, Ju Ren, Yaoxue Zhang
{"title":"UFace: Your Smartphone Can \"Hear\" Your Facial Expression!","authors":"Shuning Wang, Linghui Zhong, Yongjian Fu, Lili Chen, Ju Ren, Yaoxue Zhang","doi":"10.1145/3643546","DOIUrl":"https://doi.org/10.1145/3643546","url":null,"abstract":"Facial expression recognition (FER) is a crucial task for human-computer interaction and a multitude of multimedia applications that typically call for friendly, unobtrusive, ubiquitous, and even long-term monitoring. Achieving such a FER system meeting these multi-requirements faces critical challenges, mainly including the tiny irregular non-periodic deformation of emotion movements, high variability in facial positions and severe self-interference caused by users' own other behavior. In this work, we present UFace, a long-term, unobtrusive and reliable FER system for daily life using acoustic signals generated by a portable smartphone. We design an innovative network model with dual-stream input based on the attention mechanism, which can leverage distance-time profile features from various viewpoints to extract fine-grained emotion-related signal changes, thus enabling accurate identification of many kinds of expressions. Meanwhile, we propose effective mechanisms to deal with a series of interference issues during actual use. We implement UFace prototype with a daily-used smartphone and conduct extensive experiments in various real-world environments. The results demonstrate that UFace can successfully recognize 7 typical facial expressions with an average accuracy of 87.8% across 20 participants. Besides, the evaluation of different distances, angles, and interferences proves the great potential of the proposed system to be employed in practical scenarios.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"32 20","pages":"22:1-22:27"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
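As a rough illustration of a dual-stream, attention-based model over distance-time profiles, the following is a hypothetical sketch; it is not the UFace network, and all layer choices and dimensions are assumptions.

```python
# Two input streams (e.g., distance-time profiles from two viewpoints) are
# encoded separately and fused with cross-attention before classification.
import torch
import torch.nn as nn

class DualStreamFER(nn.Module):
    def __init__(self, in_dim=128, hidden=64, num_classes=7):
        super().__init__()
        self.enc_a = nn.GRU(in_dim, hidden, batch_first=True)
        self.enc_b = nn.GRU(in_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, stream_a, stream_b):
        ha, _ = self.enc_a(stream_a)         # (B, T, hidden)
        hb, _ = self.enc_b(stream_b)
        fused, _ = self.attn(ha, hb, hb)     # cross-attend stream A over stream B
        return self.head(fused.mean(dim=1))  # 7 expression logits

model = DualStreamFER()
logits = model(torch.randn(4, 50, 128), torch.randn(4, 50, 128))
print(logits.shape)  # torch.Size([4, 7])
```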
UHead: Driver Attention Monitoring System Using UWB Radar
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643551
Chongzhi Xu, Xiaolong Zheng, Z. Ren, Liang Liu, Huadong Ma
{"title":"UHead: Driver Attention Monitoring System Using UWB Radar","authors":"Chongzhi Xu, Xiaolong Zheng, Z. Ren, Liang Liu, Huadong Ma","doi":"10.1145/3643551","DOIUrl":"https://doi.org/10.1145/3643551","url":null,"abstract":"The focus of Advanced driver-assistance systems (ADAS) is extending from the vehicle and road conditions to the driver because the driver's attention is critical to driving safety. Although existing sensor and camera based methods can monitor driver attention, they rely on specialised hardware and environmental conditions. In this paper, we aim to develop an effective and easy-to-use driver attention monitoring system based on UWB radar. We exploit the strong association between head motions and driver attention and propose UHead that infers driver attention by monitoring the direction and angle of the driver's head rotation. The core idea is to extract rotational time-frequency representation from reflected signals and to estimate head rotation angles from complex head reflections. To eliminate the dynamic noise generated by other body parts, UHead leverages the large magnitude and high velocity of head rotation to extract head motion information from the dynamically coupled information. UHead uses a bilinear joint time-frequency representation to avoid the loss of time and frequency resolution caused by windowing of traditional methods. We also design a head structure-based rotation angle estimation algorithm to accurately estimate the rotation angle from the time-varying rotation information of multiple reflection points in the head. Experimental results show that we achieve 12.96° median error of 3D head rotation angle estimation in real vehicle scenes.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"1 3","pages":"25:1-25:28"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140260960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
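A bilinear joint time-frequency representation, such as the discrete Wigner-Ville distribution sketched below, avoids choosing an STFT window. This is a generic textbook formulation for illustration and not necessarily the exact representation UHead uses; the chirp test signal is made up.

```python
# Discrete pseudo Wigner-Ville distribution of a 1-D complex signal.
import numpy as np

def wigner_ville(x: np.ndarray) -> np.ndarray:
    n = len(x)
    half = n // 2
    wvd = np.zeros((n, n), dtype=complex)
    for t in range(n):
        # Instantaneous autocorrelation r[m] = x[t+m] * conj(x[t-m]).
        taumax = min(t, n - 1 - t, half - 1)
        m = np.arange(-taumax, taumax + 1)
        r = np.zeros(n, dtype=complex)
        r[m % n] = x[t + m] * np.conj(x[t - m])
        wvd[t] = np.fft.fft(r)  # frequency axis for this time instant
    return np.real(wvd)

# Example: a chirp whose frequency ramps up, mimicking an accelerating head turn.
t = np.arange(256) / 256.0
sig = np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))
tf_map = wigner_ville(sig)
print(tf_map.shape)  # (256, 256), time x frequency
```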
DeltaLCA: Comparative Life-Cycle Assessment for Electronics Design
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643561
Zhihang Zhang, Felix Hähnlein, Yuxuan Mei, Zachary Englhardt, Shwetak Patel, Adriana Schulz, Vikram Iyer
{"title":"DeltaLCA: Comparative Life-Cycle Assessment for Electronics Design","authors":"Zhihang Zhang, Felix Hähnlein, Yuxuan Mei, Zachary Englhardt, Shwetak Patel, Adriana Schulz, Vikram Iyer","doi":"10.1145/3643561","DOIUrl":"https://doi.org/10.1145/3643561","url":null,"abstract":"Reducing the environmental footprint of electronics and computing devices requires new tools that empower designers to make informed decisions about sustainability during the design process itself. This is not possible with current tools for life cycle assessment (LCA) which require substantial domain expertise and time to evaluate the numerous chips and other components that make up a device. We observe first that informed decision-making does not require absolute metrics and can instead be done by comparing designs. Second, we can use domain-specific heuristics to perform these comparisons. We combine these insights to develop DeltaLCA, an open-source interactive design tool that addresses the dual challenges of automating life cycle inventory generation and data availability by performing comparative analyses of electronics designs. Users can upload standard design files from Electronic Design Automation (EDA) software and the tool will guide them through determining which one has greater carbon footprints. DeltaLCA leverages electronics-specific LCA datasets and heuristics and tries to automatically rank the two designs, prompting users to provide additional information only when necessary. We show through case studies DeltaLCA achieves the same result as evaluating full LCAs, and that it accelerates LCA comparisons from eight expert-hours to a single click for devices with ~30 components, and 15 minutes for more complex devices with ~100 components.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"95 3","pages":"29:1-29:29"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140261211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
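The comparative workflow described above can be pictured as ranking two bills of materials by per-component footprint ranges and deferring to the user only when the ranges overlap. The sketch below is a guess at that logic, with made-up component names and emission factors, not DeltaLCA's actual datasets or heuristics.

```python
# Compare two bills of materials using lower/upper carbon-footprint bounds per
# component; ask the user for more detail only when the ranges overlap.
def design_footprint_bounds(bom, factors):
    """bom: {component: quantity}; factors: {component: (low, high)} in kg CO2e."""
    low = sum(qty * factors[c][0] for c, qty in bom.items())
    high = sum(qty * factors[c][1] for c, qty in bom.items())
    return low, high

def compare_designs(bom_a, bom_b, factors):
    lo_a, hi_a = design_footprint_bounds(bom_a, factors)
    lo_b, hi_b = design_footprint_bounds(bom_b, factors)
    if hi_a < lo_b:
        return "design A has the smaller footprint"
    if hi_b < lo_a:
        return "design B has the smaller footprint"
    return "inconclusive: ask the user to refine overlapping components"

factors = {"mcu": (0.5, 1.2), "pcb_cm2": (0.01, 0.03), "sensor": (0.2, 0.6)}
a = {"mcu": 1, "pcb_cm2": 40, "sensor": 2}
b = {"mcu": 2, "pcb_cm2": 60, "sensor": 3}
print(compare_designs(a, b, factors))  # overlapping ranges -> prompt the user
```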
Multimodal Daily-Life Logging in Free-living Environment Using Non-Visual Egocentric Sensors on a Smartphone
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643553
Ke Sun, Chunyu Xia, Xinyu Zhang, Hao Chen, C. Zhang
{"title":"Multimodal Daily-Life Logging in Free-living Environment Using Non-Visual Egocentric Sensors on a Smartphone","authors":"Ke Sun, Chunyu Xia, Xinyu Zhang, Hao Chen, C. Zhang","doi":"10.1145/3643553","DOIUrl":"https://doi.org/10.1145/3643553","url":null,"abstract":"Egocentric non-intrusive sensing of human activities of daily living (ADL) in free-living environments represents a holy grail in ubiquitous computing. Existing approaches, such as egocentric vision and wearable motion sensors, either can be intrusive or have limitations in capturing non-ambulatory actions. To address these challenges, we propose EgoADL, the first egocentric ADL sensing system that uses an in-pocket smartphone as a multi-modal sensor hub to capture body motion, interactions with the physical environment and daily objects using non-visual sensors (audio, wireless sensing, and motion sensors). We collected a 120-hour multimodal dataset and annotated 20-hour data into 221 ADL, 70 object interactions, and 91 actions. EgoADL proposes multi-modal frame-wise slow-fast encoders to learn the feature representation of multi-sensory data that characterizes the complementary advantages of different modalities and adapt a transformer-based sequence-to-sequence model to decode the time-series sensor signals into a sequence of words that represent ADL. In addition, we introduce a self-supervised learning framework that extracts intrinsic supervisory signals from the multi-modal sensing data to overcome the lack of labeling data and achieve better generalization and extensibility. Our experiments in free-living environments demonstrate that EgoADL can achieve comparable performance with video-based approaches, bringing the vision of ambient intelligence closer to reality.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"70 5","pages":"17:1-17:32"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140261276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
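A slow-fast encoder over sensor streams can be sketched as two branches operating at different temporal rates whose features are re-aligned and concatenated. The code below is an illustrative assumption, not the EgoADL model; channel counts, strides, and layer choices are invented.

```python
# A "slow" branch on a down-sampled sensor stream and a "fast" branch on the
# full-rate stream, concatenated per frame before a sequence decoder.
import torch
import torch.nn as nn

class SlowFastSensorEncoder(nn.Module):
    def __init__(self, in_ch=6, hidden=32, slow_stride=8):
        super().__init__()
        self.slow_stride = slow_stride
        self.slow = nn.Conv1d(in_ch, hidden, kernel_size=3, padding=1)
        self.fast = nn.Conv1d(in_ch, hidden, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (B, C, T) raw multimodal sensor frames (e.g., IMU + audio features)
        slow = self.slow(x[:, :, ::self.slow_stride])             # (B, H, T/stride)
        slow = nn.functional.interpolate(slow, size=x.shape[-1])  # re-align to fast rate
        fast = self.fast(x)                                       # (B, H, T)
        return torch.cat([slow, fast], dim=1)                     # (B, 2H, T)

enc = SlowFastSensorEncoder()
print(enc(torch.randn(2, 6, 256)).shape)  # torch.Size([2, 64, 256])
```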
LiquImager: Fine-grained Liquid Identification and Container Imaging System with COTS WiFi Devices
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643509
Fei Shang, Panlong Yang, Dawei Yan, Sijia Zhang, Xiang-Yang Li
{"title":"LiquImager: Fine-grained Liquid Identification and Container Imaging System with COTS WiFi Devices","authors":"Fei Shang, Panlong Yang, Dawei Yan, Sijia Zhang, Xiang-Yang Li","doi":"10.1145/3643509","DOIUrl":"https://doi.org/10.1145/3643509","url":null,"abstract":"WiFi has gradually developed into one of the main candidate technologies for ubiquitous sensing. Based on commercial off-the-shelf (COTS) WiFi devices, this paper proposes LiquImager, which can simultaneously identify liquid and image container regardless of container shape and position. Since the container size is close to the wavelength, diffraction makes the effect of the liquid on the signal difficult to approximate with a simple geometric model (such as ray tracking). Based on Maxwell's equations, we construct an electric field scattering sensing model. Using few measurements provided by COTS WiFi devices, we solve the scattering model to obtain the medium distribution of the sensing domain, which is used for identifing and imaging liquids. To suppress the signal noise, we propose LiqU-Net for image enhancement. For the centimeter-scale container that is randomly placed in an area of 25 cm × 25 cm, LiquImager can identify the liquid more than 90% accuracy. In terms of container imaging, LiquImager can accurately find the edge of the container for 4 types of containers with a volume less than 500 ml.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"29 11","pages":"15:1-15:29"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
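Recovering a medium distribution from a handful of WiFi measurements is, after linearization, an under-determined inverse problem. The sketch below illustrates the general idea with Tikhonov-regularized least squares on a synthetic operator; it is an assumption for illustration, not LiquImager's scattering solver, and the grid size, measurement count, and operator are made up.

```python
# Linearised inverse scattering toy: argmin_x ||A x - y||^2 + lam ||x||^2.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64          # coarse grid over the 25 cm x 25 cm sensing domain
n_meas = 20            # few measurements from COTS WiFi subcarriers/antennas

A = rng.normal(size=(n_meas, n_pixels))           # synthetic linearised operator
x_true = np.zeros(n_pixels)
x_true[20:28] = 1.0                               # container cross-section
y = A @ x_true + 0.01 * rng.normal(size=n_meas)   # noisy measurements

lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)
print(np.round(x_hat[18:30], 2))  # non-zero band indicates the recovered medium
```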
SpeciFingers: Finger Identification and Error Correction on Capacitive Touchscreens
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643559
Zeyuan Huang, Cangjun Gao, Haiyan Wang, Xiaoming Deng, Yu-Kun Lai, Cuixia Ma, Sheng-feng Qin, Yong-Jin Liu, Hongan Wang
{"title":"SpeciFingers: Finger Identification and Error Correction on Capacitive Touchscreens","authors":"Zeyuan Huang, Cangjun Gao, Haiyan Wang, Xiaoming Deng, Yu-Kun Lai, Cuixia Ma, Sheng-feng Qin, Yong-Jin Liu, Hongan Wang","doi":"10.1145/3643559","DOIUrl":"https://doi.org/10.1145/3643559","url":null,"abstract":"The inadequate use of finger properties has limited the input space of touch interaction. By leveraging the category of contacting fingers, finger-specific interaction is able to expand input vocabulary. However, accurate finger identification remains challenging, as it requires either additional sensors or limited sets of identifiable fingers to achieve ideal accuracy in previous works. We introduce SpeciFingers, a novel approach to identify fingers with the capacitive raw data on touchscreens. We apply a neural network of an encoder-decoder architecture, which captures the spatio-temporal features in capacitive image sequences. To assist users in recovering from misidentification, we propose a correction mechanism to replace the existing undo-redo process. Also, we present a design space of finger-specific interaction with example interaction techniques. In particular, we designed and implemented a use case of optimizing the performance in pointing on small targets. We evaluated our identification model and error correction mechanism in our use case.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"17 4","pages":"8:1-8:28"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
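A spatio-temporal model over capacitive image sequences could look like the hypothetical sketch below, with a per-frame convolutional encoder and a recurrent layer over time; the actual SpeciFingers encoder-decoder is not reproduced here, so all layer choices, frame sizes, and the number of finger classes are assumptions.

```python
# Toy finger identifier over a sequence of low-resolution capacitive frames.
import torch
import torch.nn as nn

class ToyFingerIdentifier(nn.Module):
    def __init__(self, num_fingers=10, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # (B*T, 16*4*4)
        )
        self.temporal = nn.GRU(16 * 4 * 4, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_fingers)

    def forward(self, frames):
        # frames: (B, T, H, W) capacitive images around a touch point
        b, t, h, w = frames.shape
        z = self.encoder(frames.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        _, last = self.temporal(z)
        return self.classifier(last[-1])                # (B, num_fingers)

model = ToyFingerIdentifier()
print(model(torch.randn(2, 8, 15, 27)).shape)  # torch.Size([2, 10])
```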
Multi-Subject 3D Human Mesh Construction Using Commodity WiFi
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643504
Yichao Wang, Yili Ren, Jie Yang
{"title":"Multi-Subject 3D Human Mesh Construction Using Commodity WiFi","authors":"Yichao Wang, Yili Ren, Jie Yang","doi":"10.1145/3643504","DOIUrl":"https://doi.org/10.1145/3643504","url":null,"abstract":"This paper introduces MultiMesh, a multi-subject 3D human mesh construction system based on commodity WiFi. Our system can reuse commodity WiFi devices in the environment and is capable of working in non-line-of-sight (NLoS) conditions compared with the traditional computer vision-based approach. Specifically, we leverage an L-shaped antenna array to generate the two-dimensional angle of arrival (2D AoA) of reflected signals for subject separation in the physical space. We further leverage the angle of departure and time of flight of the signal to enhance the resolvability for precise separation of close subjects. Then we exploit information from various signal dimensions to mitigate the interference of indirect reflections according to different signal propagation paths. Moreover, we employ the continuity of human movement in the spatial-temporal domain to track weak reflected signals of faraway subjects. Finally, we utilize a deep learning model to digitize 2D AoA images of each subject into the 3D human mesh. We conducted extensive experiments in real-world multi-subject scenarios under various environments to evaluate the performance of our system. For example, we conduct experiments with occlusion and perform human mesh construction for different distances between two subjects and different distances between subjects and WiFi devices. The results show that MultiMesh can accurately construct 3D human meshes for multiple users with an average vertex error of 4cm. The evaluations also demonstrate that our system could achieve comparable performance for unseen environments and people. Moreover, we also evaluate the accuracy of spatial information extraction and the performance of subject detection. These evaluations demonstrate the robustness and effectiveness of our system.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"14 3","pages":"23:1-23:25"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140260952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
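2D AoA estimation with an L-shaped array can be illustrated with a simple Bartlett (conventional) beamformer that scans azimuth and elevation. The snippet below is a self-contained toy example with an assumed 5.8 GHz carrier, a 7-element array, and a single simulated reflection; it is not the MultiMesh pipeline.

```python
# Bartlett beamformer over an L-shaped array: 2D AoA spectrum (azimuth x elevation).
import numpy as np

c, f = 3e8, 5.8e9               # assumed WiFi carrier frequency
lam = c / f
d = lam / 2                     # half-wavelength element spacing
n = 4                           # elements per leg of the L

# Element positions: one leg on the x-axis, one on the y-axis (shared origin).
pos = np.array([(i * d, 0.0) for i in range(n)] + [(0.0, i * d) for i in range(1, n)])

def steering(az, el):
    # Wave-vector components projected on the array plane.
    k = 2 * np.pi / lam * np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)])
    return np.exp(1j * pos @ k)

# Simulate one snapshot arriving from azimuth 30 deg, elevation 10 deg.
true = steering(np.deg2rad(30), np.deg2rad(10))
snap = true + 0.05 * (np.random.randn(len(pos)) + 1j * np.random.randn(len(pos)))

az_grid = np.deg2rad(np.arange(-90, 91, 2))
el_grid = np.deg2rad(np.arange(0, 91, 2))
spectrum = np.array([[np.abs(np.vdot(steering(a, e), snap)) for a in az_grid] for e in el_grid])
ei, ai = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(np.rad2deg(az_grid[ai]), np.rad2deg(el_grid[ei]))  # approx. 30.0 10.0
```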
IOTeeth: Intra-Oral Teeth Sensing System for Dental Occlusal Diseases Recognition
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. Pub Date: 2024-03-06 DOI: 10.1145/3643516
Zhizhang Hu, Amir Radmehr, Yue Zhang, Shijia Pan, Phuc Nguyen
{"title":"IOTeeth: Intra-Oral Teeth Sensing System for Dental Occlusal Diseases Recognition","authors":"Zhizhang Hu, Amir Radmehr, Yue Zhang, Shijia Pan, Phuc Nguyen","doi":"10.1145/3643516","DOIUrl":"https://doi.org/10.1145/3643516","url":null,"abstract":"While occlusal diseases - the main cause of tooth loss -- significantly impact patients' teeth and well-being, they are the most underdiagnosed dental diseases nowadays. Experiencing occlusal diseases could result in difficulties in eating, speaking, and chronicle headaches, ultimately impacting patients' quality of life. Although attempts have been made to develop sensing systems for teeth activity monitoring, solutions that support sufficient sensing resolution for occlusal monitoring are missing. To fill that gap, this paper presents IOTeeth, a cost-effective and automated intra-oral sensing system for continuous and fine-grained monitoring of occlusal diseases. The IOTeeth system includes an intra-oral piezoelectric-based sensing array integrated into a dental retainer platform to support reliable occlusal disease recognition. IOTeeth focuses on biting and grinding activities from the canines and front teeth, which contain essential information of occlusion. IOTeeth's intra-oral wearable collects signals from the sensors and fetches them into a lightweight and robust deep learning model called Physioaware Attention Network (PAN Net) for occlusal disease recognition. We evaluate IOTeeth with 12 articulator teeth models from dental clinic patients. Evaluation results show an F1 score of 0.97 for activity recognition with leave-one-out validation and an average F1 score of 0.92 for dental disease recognition for different activities with leave-one-out validation.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"23 4","pages":"7:1-7:29"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
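As a rough illustration of attention over an intra-oral sensing array, the sketch below weights per-sensor signal windows before classification. It is a hypothetical stand-in rather than PAN Net; the sensor count, window length, and number of classes are assumptions.

```python
# Attention pooling over piezoelectric array channels, so the model can weight
# the canine/front-teeth sensors that carry most of the occlusal information.
import torch
import torch.nn as nn

class AttentionPoolClassifier(nn.Module):
    def __init__(self, n_sensors=6, win=200, hidden=32, n_classes=5):
        super().__init__()
        self.per_sensor = nn.Sequential(nn.Linear(win, hidden), nn.ReLU())
        self.attn_score = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (B, n_sensors, win) one window of piezo samples per sensor
        h = self.per_sensor(x)                        # (B, S, hidden)
        w = torch.softmax(self.attn_score(h), dim=1)  # per-sensor attention weights
        return self.head((w * h).sum(dim=1))          # (B, n_classes)

model = AttentionPoolClassifier()
print(model(torch.randn(4, 6, 200)).shape)  # torch.Size([4, 5])
```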