Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Ask Less, Learn More: Adapting Ecological Momentary Assessment Survey Length by Modeling Question-Answer Information Gain
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-11-01. Epub Date: 2024-11-21. DOI: 10.1145/3699735
Authors: Jixin Li, Aditya Ponnada, Wei-Lin Wang, Genevieve F Dunton, Stephen S Intille
Abstract: Ecological momentary assessment (EMA) is an approach to collect self-reported data repeatedly on mobile devices in natural settings. EMAs allow for temporally dense, ecologically valid data collection, but frequent interruptions with lengthy surveys on mobile devices can burden users, impacting compliance and data quality. We propose a method that reduces the length of each EMA question set measuring interrelated constructs, with only modest information loss. By estimating the potential information gain of each EMA question using question-answer prediction models, this method can prioritize the presentation of the most informative question in a question-by-question sequence and skip uninformative questions. We evaluated the proposed method by simulating question omission using four real-world datasets from three different EMA studies. When compared against the random question omission approach that skips 50% of the questions, our method reduces imputation errors by 15%-52%. In surveys with five answer options for each question, our method can reduce the mean survey length by 34%-56% with a real-time prediction accuracy of 72%-95% for the skipped questions. The proposed method may either allow more constructs to be surveyed without adding user burden or reduce response burden for more sustainable longitudinal EMA data collection.
Volume: 8(4). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633767/pdf/
Citations: 0
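The entropy-driven, question-by-question selection the abstract describes can be illustrated with a minimal sketch: at each step, ask the remaining question whose predicted answer distribution is most uncertain, and impute the rest once they become predictable. The `predict_answer_distribution` and `ask_user` callables, the entropy threshold, and the greedy stopping rule are illustrative assumptions, not the paper's implementation.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete answer distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def administer_survey(questions, predict_answer_distribution, ask_user,
                      entropy_threshold=0.5):
    """Greedy question-by-question loop: always ask the currently most
    uncertain (most informative) question; impute questions whose predicted
    answer distribution is already near-certain."""
    answered = {}                      # question id -> observed or imputed answer
    remaining = set(questions)
    while remaining:
        # Predict an answer distribution for every remaining question,
        # conditioned on the answers collected so far.
        dists = {q: predict_answer_distribution(q, answered) for q in remaining}
        # Expected information gain is approximated by predictive entropy.
        q_next, dist = max(dists.items(), key=lambda kv: entropy(kv[1]))
        if entropy(dist) < entropy_threshold:
            # All remaining questions are predictable enough: impute and stop.
            for q in remaining:
                answered[q] = max(range(len(dists[q])), key=dists[q].__getitem__)
            break
        answered[q_next] = ask_user(q_next)
        remaining.remove(q_next)
    return answered

# Tiny demo with a dummy predictor that becomes confident after one answer.
questions = ["stress", "fatigue", "mood"]
demo_predict = lambda q, answered: ([0.95, 0.02, 0.01, 0.01, 0.01]
                                    if answered else [0.2] * 5)
demo_ask = lambda q: 2
print(administer_survey(questions, demo_predict, demo_ask))
```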
MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-11-01. Epub Date: 2024-11-21. DOI: 10.1145/3699761
Authors: Subigya Nepal, Arvind Pillai, William Campbell, Talie Massachi, Michael V Heinz, Ashmita Kunwar, Eunsol Soul Choi, Xuhai Xu, Joanna Kuc, Jeremy F Huckins, Jason Holden, Sarah M Preum, Colin Depp, Nicholas Jacobson, Mary P Czerwinski, Eric Granholm, Andrew T Campbell
Abstract: Mental health concerns are prevalent among college students, highlighting the need for effective interventions that promote self-awareness and holistic well-being. MindScape explores a novel approach to AI-powered journaling by integrating passively collected behavioral patterns such as conversational engagement, sleep, and location with Large Language Models (LLMs). This integration creates a highly personalized and context-aware journaling experience, enhancing self-awareness and well-being by embedding behavioral intelligence into AI. We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect (7%), reducing negative affect (11%), loneliness (6%), and anxiety and depression, with a significant week-over-week decrease in PHQ-4 scores (-0.25 coefficient). The study highlights the advantages of contextual AI journaling, with participants particularly appreciating the tailored prompts and insights provided by the MindScape app. Our analysis also includes a comparison of responses to AI-driven contextual versus generic prompts, participant feedback insights, and proposed strategies for leveraging contextual AI journaling to improve well-being on college campuses. By showcasing the potential of contextual AI journaling to support mental health, we provide a foundation for further investigation into the effects of contextual AI journaling on mental health and well-being.
Volume: 8(4). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11634059/pdf/
Citations: 0
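As a rough illustration of the context-aware prompting the abstract describes, the sketch below turns passively sensed behavioral signals into a personalized journaling prompt; the field names, thresholds, and prompt wording are assumptions for illustration, not the MindScape design.

```python
from dataclasses import dataclass

@dataclass
class DailyContext:
    # Passively sensed behavioral signals (illustrative fields).
    sleep_hours: float
    conversation_minutes: float
    places_visited: int

def build_journaling_prompt(ctx: DailyContext) -> str:
    """Turn sensed behavior into a context-aware journaling prompt that an
    LLM could then expand or present directly to the user."""
    observations = []
    if ctx.sleep_hours < 6:
        observations.append(f"you slept about {ctx.sleep_hours:.1f} hours")
    if ctx.conversation_minutes < 30:
        observations.append("you had fewer conversations than usual")
    if ctx.places_visited <= 1:
        observations.append("you mostly stayed in one place")
    context = "; ".join(observations) or "your day looked fairly typical"
    return (f"Yesterday, {context}. "
            "Write a short journal entry about how that felt and one small "
            "thing you could do for your well-being today.")

print(build_journaling_prompt(DailyContext(5.2, 12.0, 1)))
```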
Beyond Detection: Towards Actionable Sensing Research in Clinical Mental Healthcare
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-11-01. Epub Date: 2024-11-21. DOI: 10.1145/3699755
Authors: Daniel A Adler, Yuewen Yang, Thalia Viranda, Xuhai Xu, David C Mohr, Anna R Van Meter, Julia C Tartaglia, Nicholas C Jacobson, Fei Wang, Deborah Estrin, Tanzeem Choudhury
Abstract: Researchers in ubiquitous computing have long promised that passive sensing will revolutionize mental health measurement by detecting individuals in a population experiencing a mental health disorder or specific symptoms. Recent work suggests that detection tools do not generalize well when trained and tested in more heterogeneous samples. In this work, we contribute a narrative review and findings from two studies with 41 mental health clinicians to understand these generalization challenges. Our findings motivate research on actionable sensing, as an alternative to detection research, studying how passive sensing can augment traditional mental health measures to support actions in clinical care. Specifically, we identify how passive sensing can support clinical actions by revealing patients' presenting problems for treatment and identifying targets for behavior change and symptom reduction, but passive data requires additional contextual information to be appropriately interpreted and used in care. We conclude by suggesting research at the intersection of actionable sensing and mental healthcare, to align technical research in ubiquitous computing with clinical actions and needs.
Volume: 8(4). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11620792/pdf/
Citations: 0
Self-Supervised Representation Learning and Temporal-Spectral Feature Fusion for Bed Occupancy Detection
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-09-01. Epub Date: 2024-09-09. DOI: 10.1145/3678514
Authors: Yingjian Song, Zaid Farooq Pitafi, Fei Dou, Jin Sun, Xiang Zhang, Bradley G Phillips, Wenzhan Song
Abstract: In automated sleep monitoring systems, bed occupancy detection is the foundation, or first step, before other downstream tasks such as inferring sleep activities and vital signs. Existing methods rely on threshold-based approaches and, because they are developed in single-environment settings, do not generalize well to real-world environments. Manually selecting thresholds requires observing a large amount of data and may not yield optimal results, while acquiring extensive labeled sensory data poses significant cost and time challenges. Hence, developing models capable of generalizing across diverse environments with limited data is imperative. This paper introduces SeismoDot, which consists of a self-supervised learning module and a spectral-temporal feature fusion module for bed occupancy detection. Unlike conventional methods that require separate pre-training and fine-tuning, our self-supervised learning module is co-optimized with the primary target task, which directs learned representations toward a task-relevant embedding space while expanding the feature space. The proposed feature fusion module enables the simultaneous exploitation of temporal and spectral features, enhancing the diversity of information from both domains. By combining these techniques, SeismoDot expands the diversity of the embedding space for both the temporal and spectral domains to enhance its generalizability across different environments. SeismoDot not only achieves high accuracy (98.49%) and F1 scores (98.08%) across 13 diverse environments, but also maintains high performance (97.01% accuracy and 96.54% F1 score) even when trained with just 20% (4 days) of the total data. This demonstrates its exceptional ability to generalize across various environmental settings, even with limited data availability.
Volume: 8(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906163/pdf/
Citations: 0
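A minimal PyTorch sketch of the two ideas highlighted in the abstract, a self-supervised objective co-optimized with the occupancy classifier and fusion of temporal and spectral features, is shown below; the layer sizes, FFT-magnitude spectral branch, reconstruction pretext task, and loss weighting are illustrative assumptions rather than the SeismoDot architecture.

```python
import torch
import torch.nn as nn

class TemporalSpectralNet(nn.Module):
    """Toy two-branch encoder: one branch over the raw vibration window,
    one over its magnitude spectrum; the fused features feed both a
    classifier and a self-supervised reconstruction head trained jointly."""
    def __init__(self, window_len=256, feat_dim=64):
        super().__init__()
        self.temporal = nn.Sequential(nn.Linear(window_len, feat_dim), nn.ReLU())
        self.spectral = nn.Sequential(nn.Linear(window_len // 2 + 1, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(2 * feat_dim, 2)        # occupied / empty
        self.decoder = nn.Linear(2 * feat_dim, window_len)  # self-supervised head

    def forward(self, x):
        spec = torch.fft.rfft(x, dim=-1).abs()               # spectral view
        fused = torch.cat([self.temporal(x), self.spectral(spec)], dim=-1)
        return self.classifier(fused), self.decoder(fused)

def joint_loss(model, x, y, ssl_weight=0.5):
    """Supervised occupancy loss plus a reconstruction pretext loss,
    optimized together rather than in separate pre-training/fine-tuning."""
    logits, recon = model(x)
    supervised = nn.functional.cross_entropy(logits, y)
    self_supervised = nn.functional.mse_loss(recon, x)
    return supervised + ssl_weight * self_supervised

model = TemporalSpectralNet()
x = torch.randn(8, 256)                 # batch of vibration windows
y = torch.randint(0, 2, (8,))           # bed occupancy labels
loss = joint_loss(model, x, y)
loss.backward()
```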
HabitSense: A Privacy-Aware, AI-Enhanced Multimodal Wearable Platform for mHealth Applications
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-09-01. Epub Date: 2024-09-09. DOI: 10.1145/3678591
Authors: Glenn J Fernandes, Jiayi Zheng, Mahdi Pedram, Christopher Romano, Farzad Shahabi, Blaine Rothrock, Thomas Cohen, Helen Zhu, Tanmeet S Butani, Josiah Hester, Aggelos K Katsaggelos, Nabil Alshurafa
Abstract: Wearable cameras provide an objective method to visually confirm and automate the detection of health-risk behaviors such as smoking and overeating, which is critical for developing and testing adaptive treatment interventions. Despite the potential of wearable camera systems, adoption is hindered by inadequate clinician input in the design, user privacy concerns, and user burden. To address these barriers, we introduced HabitSense, an open-source, multimodal neck-worn platform developed with input from focus groups with clinicians (N=36) and user feedback from in-the-wild studies involving 105 participants over 35 days. Optimized for monitoring health-risk behaviors, the platform uses RGB, thermal, and inertial measurement unit sensors to detect eating and smoking events in real time. In a 7-day study involving 15 participants, HabitSense recorded 768 hours of footage, capturing 420.91 minutes of hand-to-mouth gestures associated with eating and smoking (data crucial for training machine learning models), achieving a 92% F1-score in gesture recognition. To address privacy concerns, the platform records only during likely health-risk behavior events using SECURE, a smart activation algorithm. Additionally, HabitSense employs on-device obfuscation algorithms that selectively obfuscate the background during recording, maintaining individual privacy while leaving gestures related to health-risk behaviors unobfuscated. Our implementation of SECURE has resulted in a 48% reduction in storage needs and a 30% increase in battery life. This paper highlights the critical roles of clinician feedback, extensive field testing, and privacy-enhancing algorithms in developing an unobtrusive, lightweight, and reproducible wearable system that is both feasible and acceptable for monitoring health-risk behaviors in real-world settings.
Volume: 8(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879279/pdf/
Citations: 0
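The privacy mechanisms the abstract names, gated recording and selective background obfuscation, might look roughly like the sketch below: record only when a lightweight detector deems a health-risk gesture likely, and pixelate everything outside the detected gesture region. The probability threshold, block pixelation, and mask handling are illustrative assumptions, not HabitSense's SECURE or obfuscation algorithms.

```python
import numpy as np

def obfuscate_background(frame: np.ndarray, gesture_mask: np.ndarray,
                         block: int = 16) -> np.ndarray:
    """Coarsely pixelate everything outside the gesture region.
    frame: HxWx3 uint8 image; gesture_mask: HxW bool array marking the
    hand-to-mouth region that should stay un-obfuscated."""
    h, w, _ = frame.shape
    obfuscated = frame.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = obfuscated[y:y + block, x:x + block]
            tile[:] = tile.reshape(-1, 3).mean(axis=0).astype(np.uint8)
    return np.where(gesture_mask[..., None], frame, obfuscated)

def should_record(gesture_probability: float, threshold: float = 0.7) -> bool:
    """Smart-activation gate (illustrative): only start recording when a
    lightweight detector thinks a health-risk gesture is likely."""
    return gesture_probability >= threshold

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = np.zeros((120, 160), dtype=bool)
mask[40:80, 60:100] = True              # pretend this is the detected hand region
if should_record(0.85):
    safe_frame = obfuscate_background(frame, mask)
```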
Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
IF 3.6
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-03-01. Epub Date: 2024-03-06. DOI: 10.1145/3643540
Authors: Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K Dey, Dakuo Wang
Abstract: Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present a comprehensive evaluation of multiple LLMs on various mental health prediction tasks via online text data, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction fine-tuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize the important limitations before achieving deployability in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.
Volume: 8(1). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11806945/pdf/
Citations: 0
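A zero-shot prompt of the general shape evaluated in the paper can be sketched as below; the exact template wording and the yes/no output format are assumptions for illustration, not the paper's prompt designs.

```python
def zero_shot_mental_health_prompt(post: str, task: str = "depression") -> str:
    """Minimal zero-shot prompt: classify an online post for a given
    mental health condition. The wording is an illustrative assumption."""
    return (
        "You are an assistant that analyzes social media posts.\n"
        f"Post: \"{post}\"\n"
        f"Question: Does the author of this post show signs of {task}? "
        "Answer with 'yes' or 'no' only.\n"
        "Answer:"
    )

prompt = zero_shot_mental_health_prompt(
    "I haven't slept in days and nothing feels worth doing anymore.")
print(prompt)
# The prompt would then be sent to an LLM (e.g., GPT-3.5/4 or a fine-tuned
# Alpaca/FLAN-T5 model) and the yes/no answer parsed from the completion.
```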
Orientation-Aware 3D SLAM in Alternating Magnetic Field from Powerlines
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-01-12. DOI: 10.1145/3631446
Authors: Rongrong Wang, Rui Tan, Zhenyu Yan, Chris Xiaoxuan Lu
Abstract: Identifying new sensing modalities for indoor localization is an ongoing research interest. This paper studies powerline-induced alternating magnetic field (AMF), which fills indoor spaces, for orientation-aware three-dimensional (3D) simultaneous localization and mapping (SLAM). While an existing study has adopted a uniaxial AMF sensor for SLAM on a planar surface, that design falls short of addressing the vector-field nature of AMF and is therefore susceptible to sensor orientation variations. Moreover, although the higher spatial variability of AMF compared with indoor geomagnetism improves location-sensing resolution, extra SLAM algorithm designs are needed to achieve robustness to trajectory deviations from the constructed map. To address these issues, we design a new triaxial AMF sensor and a new SLAM algorithm that constructs a 3D AMF intensity map regularized and augmented by a Gaussian process. The triaxial sensor's orientation estimation is free of the error-accumulation problem faced by inertial sensing. In extensive evaluation across eight indoor environments, our AMF-based 3D SLAM achieves sub-1 m to 3 m median localization errors in spaces of up to 500 m², sub-2° mean error in orientation sensing, and outperforms SLAM systems based on Wi-Fi, geomagnetism, and uniaxial AMF by more than 30%.
Pages: 1-25
Citations: 0
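The Gaussian-process regularization of the AMF intensity map can be illustrated with a small scikit-learn sketch: fit a GP to sparse intensity samples collected along a trajectory so that intensity (with uncertainty) can be queried at arbitrary 3D locations. Collapsing the triaxial field to a scalar magnitude, the synthetic data, and the kernel choice are simplifying assumptions, not the paper's map construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sparse AMF samples along a walked trajectory: columns are (x, y, z)
# positions in metres; the field is reduced to a scalar magnitude here.
positions = np.random.rand(200, 3) * np.array([20.0, 15.0, 3.0])
amf = (np.sin(positions[:, 0]) + 0.5 * np.cos(2 * positions[:, 1])
       + 0.05 * np.random.randn(200))      # synthetic intensities

# The GP regularizes and augments the sparse map so intensity (with
# uncertainty) can be queried anywhere in 3D during SLAM.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(positions, amf)

query = np.array([[10.0, 7.5, 1.2]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted AMF intensity {mean[0]:.3f} +/- {std[0]:.3f}")
```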
HapticPilot
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-01-12. DOI: 10.1145/3631453
Authors: Youjin Sung, Rachel Kim, Kun Woo Song, Yitian Shao, Sang Ho Yoon
Abstract: The emergence of vibrotactile feedback in hand wearables enables immersive virtual reality (VR) experiences with whole-hand haptic rendering. However, existing haptic rendering neglects the inconsistent sensations caused by different hand postures. In our study, we observed that changing hand postures alters the distribution of vibrotactile signals, which can degrade haptic perception. To address this, we present HapticPilot, which enables in-situ haptic experience design for hand wearables in VR. We developed an in-situ authoring system supporting instant haptic design. In the authoring tool, we applied our posture-adaptive haptic rendering algorithm with a novel haptic design abstraction called the phantom grid. The algorithm adapts the phantom grid to the target posture and incorporates 1D and 2D phantom sensations with a unique actuator arrangement to provide a whole-hand experience. With this method, HapticPilot provides a consistent haptic experience across various hand postures. Through measuring perceptual haptic performance and collecting qualitative feedback, we validated the usability of the system. Finally, we demonstrated our system with prospective VR scenarios showing how it enables an intuitive, empowering, and responsive haptic authoring framework.
Pages: 1-28
Citations: 0
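The 1D phantom sensation the abstract mentions is commonly rendered with the energy-summation model: drive amplitudes are split between two neighboring actuators so the perceived intensity of the virtual point stays constant as it moves between them. The sketch below uses that standard model on a toy actuator strip; it is not HapticPilot's posture-adaptive algorithm, and the grid layout is an assumption.

```python
import math

def phantom_1d(beta: float, intensity: float = 1.0):
    """Energy-model 1D phantom sensation: a virtual tactile point at
    normalized position beta (0..1) between actuators A and B is rendered
    by splitting amplitudes so perceived energy stays constant."""
    return intensity * math.sqrt(1.0 - beta), intensity * math.sqrt(beta)

def render_on_strip(target: float, grid):
    """Pick the two actuators bracketing the target position on a 1D strip
    and interpolate between them (illustrative)."""
    xs = sorted(grid)
    left = max((x for x in xs if x <= target), default=xs[0])
    right = min((x for x in xs if x >= target), default=xs[-1])
    if left == right:
        return {left: 1.0}
    a, b = phantom_1d((target - left) / (right - left))
    return {left: a, right: b}

# Virtual contact 37% of the way across a 5-actuator strip:
print(render_on_strip(0.37, grid=[0.0, 0.25, 0.5, 0.75, 1.0]))
```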
TouchEditor
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-01-12. DOI: 10.1145/3631454
Authors: Lishuang Zhan, Tianyang Xiong, Hongwei Zhang, Shihui Guo, Xiaowei Chen, Jiangtao Gong, Juncong Lin, Yipeng Qin
Abstract: A text editing solution that adapts to speech-unfriendly environments (where it is inconvenient to speak or difficult to recognize speech) is essential for head-mounted displays (HMDs) to work universally. Existing schemes, e.g., the touch bar, virtual keyboard, and physical keyboard, have shortcomings such as insufficient speed, uncomfortable experience, or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping, and editing commands (i.e., Copy, Paste, Delete, etc.). Through a literature overview and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor (i) adapts well to various contents and scenes with a stable correction speed of 0.075 corrections per second; (ii) achieves 95.4% gesture recognition accuracy; and (iii) reaches a level comparable to a mobile phone in text selection tasks. Comparison results with the speech-dependent EYEditor and a built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.
Pages: 1-29
Citations: 0
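One way to picture the area-and-pressure mapping described in the abstract: the touched region of the film sensor selects the cursor direction, and the press strength selects the granularity of the movement. The quadrant layout, thresholds, and step sizes below are illustrative assumptions, not TouchEditor's method.

```python
def cursor_step(touch_x: float, touch_y: float, pressure: float,
                width: float = 1.0, height: float = 1.0,
                light_step: int = 1, firm_step: int = 5,
                pressure_threshold: float = 0.5):
    """Illustrative area-and-pressure mapping: which half of the sensor is
    touched picks the cursor direction, and how hard it is pressed picks
    the movement granularity (e.g., character-sized vs. word-sized steps)."""
    step = firm_step if pressure >= pressure_threshold else light_step
    dx = step if touch_x > width / 2 else -step
    dy = step if touch_y > height / 2 else -step
    # Move along the dominant axis only, like a D-pad.
    if abs(touch_x - width / 2) >= abs(touch_y - height / 2):
        return dx, 0
    return 0, dy

print(cursor_step(0.9, 0.4, pressure=0.8))   # firm press on the right: (5, 0)
```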
Thermal Earring
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Pub Date: 2024-01-12. DOI: 10.1145/3631440
Authors: Qiuyue Shirley Xue, Yujia Liu, Joseph Breda, Mastafa Springston, Vikram Iyer, Shwetak Patel
Abstract: Body temperature is an important vital sign that can indicate fever and is known to be correlated with activities such as eating, exercise, and stress. However, continuous temperature monitoring poses a significant challenge. We present Thermal Earring, a first-of-its-kind smart earring that enables a reliable wearable solution for continuous temperature monitoring. The Thermal Earring takes advantage of the unique position of earrings in proximity to the head, a region with tight coupling to the body, unlike watches and other wearables that are worn more loosely on the extremities. We develop a hardware prototype in the form factor of real earrings, measuring a maximum width of 11.3 mm and a length of 31 mm, weighing 335 mg, and consuming only 14.4 µW, which enables a battery life of 28 days in real-world tests. We demonstrate that this form factor is small and light enough to integrate into real jewelry with fashionable designs. Additionally, we develop a dual-sensor design to differentiate human body temperature change from environmental changes. We explore the use of this novel sensing platform and find its measured earlobe temperatures are stable within ±0.32 °C during periods of rest. Using these promising results, we investigate its capability of detecting fever by gathering data from 5 febrile patients and 20 healthy participants. Further, we perform the first-ever investigation of the relationship between earlobe temperature and a variety of daily activities, demonstrating earlobe temperature changes related to eating and exercise. We also find the surprising result that acute stressors such as public speaking and exams cause measurable changes in earlobe temperature. We perform multi-day in-the-wild experiments and confirm the temperature changes caused by these daily activities in natural daily scenarios. This initial exploration seeks to provide a foundation for future automatic activity detection and earring-based wearables.
Pages: 1-28
Citations: 0
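The dual-sensor idea, distinguishing body-driven temperature changes from environmental swings, can be sketched as a simple differential correction between the earlobe-facing and ambient-facing readings; the linear model and coefficient below are assumptions for illustration, not the paper's calibration.

```python
def body_temp_signal(skin_readings, ambient_readings, k: float = 0.8):
    """Subtract a scaled ambient-facing reading from the earlobe-facing
    reading so environmental swings roughly cancel and body-driven changes
    remain. The coefficient k is an illustrative assumption."""
    return [skin - k * ambient
            for skin, ambient in zip(skin_readings, ambient_readings)]

skin =    [33.1, 33.2, 33.9, 34.0]   # earlobe-facing sensor (deg C)
ambient = [24.0, 24.1, 24.1, 24.0]   # outward/ambient-facing sensor (deg C)
print(body_temp_signal(skin, ambient))
```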