Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Publications

PackquID: In-packet Liquid Identification Using RF Signals
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3569469
Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li
{"title":"PackquID: In-packet Liquid Identification Using RF Signals","authors":"Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li","doi":"10.1145/3569469","DOIUrl":"https://doi.org/10.1145/3569469","url":null,"abstract":"There are many scenarios where the liquid is occluded by other items ( e.g","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"21 1","pages":"181:1-181:27"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73214010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3550337
T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio
{"title":"SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen","authors":"T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio","doi":"10.1145/3550337","DOIUrl":"https://doi.org/10.1145/3550337","url":null,"abstract":"Educational apps support learning, but handwriting training is still based on analog pen- and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills due to the different haptic feedback of the tablet, stylus, or finger compared to pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on a standard paper placed on a tablet’s screen, augmenting the paper from below, showing animated letters and individual feedback. We conducted two online surveys with overall 29 parents and teachers of elementary school pupils and a user study with 13 children and 13 parents for evaluation. Our results show the importance of the genuine analog haptic feedback combined with the augmentation of SpARklingPaper. It was rated superior compared to our stylus baseline condition regarding pen-handling, writing training-success, motivation, and overall impression. SpARklingPaper can be a blueprint for high-fidelity haptic feedback handwriting training systems.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"102 1","pages":"113:1-113:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73864905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3517224
Luca Arrotta, Gabriele Civitarese, C. Bettini
{"title":"DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments","authors":"Luca Arrotta, Gabriele Civitarese, C. Bettini","doi":"10.1145/3517224","DOIUrl":"https://doi.org/10.1145/3517224","url":null,"abstract":"The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. The application of Explainable Artificial Intelligence (XAI) to ADLs recognition has the potential of making this process trusted, transparent and understandable. The few works that investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology to transform sensor data into semantic images to take advantage of XAI methods based on Convolutional Neural Networks (CNN). We apply different XAI approaches for deep learning and, from the resulting heat maps, we generate explanations in natural language. In order to identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"14 1","pages":"1:1-1:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81979148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
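To make the semantic-image idea concrete, here is a minimal sketch of encoding a window of binary smart-home sensor events as a 2D array a CNN could consume. The sensor list, window length, and row/column layout are illustrative assumptions, not DeXAR's actual semantic-image design:

```python
import numpy as np

# Hypothetical sensor inventory; rows of the image, one per sensor.
SENSORS = ["kitchen_motion", "fridge_door", "stove_power", "bed_pressure"]

def events_to_image(events, window_start, window_len=60.0, time_bins=64):
    """events: list of (timestamp_s, sensor_name, value in {0, 1})."""
    img = np.zeros((len(SENSORS), time_bins), dtype=np.float32)
    for t, name, value in events:
        if name not in SENSORS:
            continue
        offset = t - window_start
        if 0 <= offset < window_len:
            col = int(offset / window_len * time_bins)   # time bin of the event
            row = SENSORS.index(name)
            img[row, col] = max(img[row, col], value)
    return img  # shape (num_sensors, time_bins); feed to a CNN after resizing

image = events_to_image([(3.2, "kitchen_motion", 1), (15.8, "fridge_door", 1)],
                        window_start=0.0)
```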
HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3569500
Z. Wang
{"title":"HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing","authors":"Z. Wang","doi":"10.1145/3569500","DOIUrl":"https://doi.org/10.1145/3569500","url":null,"abstract":"Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from short sensing range (e.g., ≤ 0.5m using a thermometer), susceptible to interferences (e.g., smoke detector) or high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use and timely room-scale fire detection system via acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which remotely senses fire by emitting inaudible sound waves. Unlike existing works that use signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense the fire due to unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the relationship between the transmission signal length and sensing distance. The transmission frame is carefully selected to expand sensing range and balance a series of practical factors that impact the system’s performance. We further design a simple yet effective approach to remove the environmental interference caused by signal reflection by conducting a deep investigation into channel differences between sound reflection and sound absorption. Specifically, sound reflection results in a much more stable pattern in terms of signal energy than sound absorption, which can be exploited to differentiate the channel measurements caused by fire from other interferences. Extensive experiments demonstrate that HireFire enables a maximum 7m sensing range and achieves timely fire detection in indoor environments with up to 99 . 2% accuracy under different experiment configurations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"28 1","pages":"185:1-185:25"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78973624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
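The reflection-vs-absorption intuition in the abstract can be sketched as follows: reflected interference tends to produce stable per-frame energy, while flame-induced absorption fluctuates. The framing, threshold, and coefficient-of-variation test below are illustrative assumptions, not HearFire's actual pipeline:

```python
import numpy as np

def frame_energies(samples, frame_len=2048):
    # Split the received (band-passed) signal into frames and sum squared
    # amplitudes per frame.
    n = len(samples) // frame_len
    frames = np.reshape(samples[:n * frame_len], (n, frame_len))
    return np.sum(frames.astype(np.float64) ** 2, axis=1)

def looks_like_fire(samples, stability_threshold=0.15):
    e = frame_energies(samples)
    # Coefficient of variation: low for stable reflections, higher when
    # absorption by flame modulates the received energy over time.
    cv = np.std(e) / (np.mean(e) + 1e-12)
    return cv > stability_threshold

mic = np.random.default_rng(1).normal(size=48000)  # stand-in for mic samples
print(looks_like_fire(mic))
```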
HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3570344
Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores
{"title":"HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions","authors":"Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores","doi":"10.1145/3570344","DOIUrl":"https://doi.org/10.1145/3570344","url":null,"abstract":"Hand-grip strength is widely used to estimate muscle strength and it serves as a general indicator of the overall health of a person, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force resistant pressure sensors embedded onto objects. Both of these solutions require the user to interact with a dedicated measurement device which unnecessarily restricts the contexts where estimates are acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback everyday interactions for health information without affecting the user’s everyday routines. We present two prototypes integrating HIPPO, an early smart glove proof-of-concept, and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare HIPPO against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects, and participants. The force strength estimates correlate with estimates produced by pressure-based devices, and can also determine the correct hand grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions as HIPPO blends the estimation with everyday interactions.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"61 1","pages":"209:1-209:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74486798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
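As a minimal sketch of the idea, features of the dip in reflected light intensity during a grip can be regressed onto grip force. The feature set, the linear model, and all values below are assumptions for illustration, not HIPPO's actual sensing pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def grip_features(light_trace):
    baseline = np.median(light_trace[:20])        # reflectivity before the grip
    drop = baseline - np.min(light_trace)         # depth of the dip while gripping
    slope = np.max(np.abs(np.diff(light_trace)))  # sharpness of the change
    return [drop, slope]

# Hypothetical training data: features from labeled interaction traces vs.
# dynamometer ground truth in kilograms.
X = np.array([[12.0, 3.1], [25.0, 6.4], [40.0, 9.8]])
y = np.array([10.0, 22.0, 35.0])
model = LinearRegression().fit(X, y)

trace = np.concatenate([np.full(20, 100.0), np.linspace(100.0, 70.0, 30)])
print(model.predict([grip_features(trace)])[0])   # estimated grip force (kg)
```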
Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3534620
Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang
{"title":"Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction","authors":"Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang","doi":"10.1145/3534620","DOIUrl":"https://doi.org/10.1145/3534620","url":null,"abstract":"Feet are the foundation of our bodies that not only perform locomotion but also participate in intent and emotion expression. Thus, foot gestures are an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed by a set of gestures elicited from a participatory design session with 12 users. We implement a machine learning model in Shoes++ which can recognize two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracies (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes to support comfortable social foot interaction without taking off the shoes. Based on users’ qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, thus making social interactions or interpersonal dynamics more engaging in an expanded design space. Additional Key and smart sole Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2, (June 2022),","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"8 1","pages":"85:1-85:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74092177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
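A minimal sketch of the recognition step: windows of sole-mounted IMU data are summarized into features and fed to a classifier. The features, gesture labels, and choice of random forest are illustrative assumptions, not Shoes++'s actual model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def imu_features(window):
    # window: (n_samples, 6) array of [ax, ay, az, gx, gy, gz]
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(window).max(axis=0)])

# Synthetic stand-in for labeled gesture windows.
rng = np.random.default_rng(0)
X = np.array([imu_features(rng.normal(size=(100, 6))) for _ in range(40)])
y = rng.integers(0, 3, size=40)   # e.g., 0=toe_tap, 1=heel_bump, 2=sole_slide
clf = RandomForestClassifier().fit(X, y)

gesture = clf.predict([imu_features(rng.normal(size=(100, 6)))])[0]
```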
StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3550305
Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani
{"title":"StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps","authors":"Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani","doi":"10.1145/3550305","DOIUrl":"https://doi.org/10.1145/3550305","url":null,"abstract":"presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users’ interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The strap allows the effective combination of these inputs as a mode of interaction with the content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of the use of StretchAR as input modalities when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with a 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch–stretch capabilities of StretchAR. Finally, we facilitate recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses. Exploiting as","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"9 1","pages":"134:1-134:26"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76663092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
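One way to picture combining touch and stretch into discrete input events is the sketch below. The thresholds and event vocabulary are invented for illustration; StretchAR's sensing hardware and its 28-interaction vocabulary are defined in the paper:

```python
def classify_input(strain_ratio, touch_active):
    """strain_ratio: strap elongation relative to rest length (1.0 = rest).
    touch_active: whether the capacitive/resistive touch channel fired."""
    if touch_active and strain_ratio < 1.05:
        return "tap"             # touch without meaningful stretch
    if touch_active and strain_ratio >= 1.05:
        return "touch_drag"      # simultaneous touch + stretch, e.g. a slider
    if strain_ratio >= 1.3:
        return "strong_stretch"  # e.g. confirm/select a widget
    if strain_ratio >= 1.05:
        return "light_stretch"   # e.g. scroll
    return "idle"

assert classify_input(1.4, touch_active=False) == "strong_stretch"
```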
BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3569482
Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen
{"title":"BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses","authors":"Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen","doi":"10.1145/3569482","DOIUrl":"https://doi.org/10.1145/3569482","url":null,"abstract":"Spontaneous selection of IoT devices from the head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1 compatible IoT device by nodding at, pointing at, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of IoT and wrist-worn devices’ advertising signals. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real-time for both hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves a higher than 90% selection accuracy within a 3 meters distance in front of the user. In a user study that mimics real-life usage cases, the overall selection accuracy is 96.7% for a diverse set of 22 participants in terms of age, technology savviness, and body structures.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"280 1","pages":"198:1-198:28"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80136760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
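The primitive behind BLE 5.1 direction finding is worth a small worked sketch: with antenna spacing d and carrier wavelength λ, a phase difference Δφ between two antennas implies an arrival angle of arcsin(λΔφ / (2πd)). The array geometry and values below are illustrative; BLEselect's compact multi-antenna array and full pipeline differ:

```python
import numpy as np

def aoa_from_phase(dphi_rad, d=0.035, lam=0.125):
    # lam ≈ 0.125 m is the 2.4 GHz BLE carrier wavelength; spacing d must be
    # at most lam/2 to avoid angular ambiguity.
    s = lam * dphi_rad / (2 * np.pi * d)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

print(aoa_from_phase(0.8))  # a 0.8 rad phase lead -> arrival angle in degrees
```

In practice the estimate is refined by sampling the Constant Tone Extension across more than two antennas and averaging over packets, which is what makes the small glasses-mounted array workable.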
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3569473
J. Huh, Hyejin Shin, Hongmin Kim, Eunyong Cheon, Young-sok Song, Choong-Hoon Lee, Ian Oakley
{"title":"WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches","authors":"J. Huh, Hyejin Shin, Hongmin Kim, Eunyong Cheon, Young-sok Song, Choong-Hoon Lee, Ian Oakley","doi":"10.1145/3569473","DOIUrl":"https://doi.org/10.1145/3569473","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"6 1","pages":"167:1-167:34"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90000381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · Pub Date: 2022-01-01 · DOI: 10.1145/3550282
Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu
{"title":"LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects","authors":"Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu","doi":"10.1145/3550282","DOIUrl":"https://doi.org/10.1145/3550282","url":null,"abstract":"Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily life. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee to provide BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone such that it is an extension of the existing headphone. The top-level software includes a series of seamless image processing algorithms to solve the challenges brought by the unconstrained wearable form so as to ensure excellent real-time performance. Moreover, users are provided with a personalized guidance scheme so that they can use LiSee quickly based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and participants’ homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants to reach surrounding objects.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"17 1","pages":"104:1-104:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82675803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2