Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Powered by AI
Mayara Costa Figueiredo, Elizabeth A. Ankrah, Jacquelyn E. Powell, Daniel A. Epstein, Yunan Chen
DOI: 10.1145/3631414 · Published: 2024-01-12 · Pages: 1-24
Abstract: Recently, there has been a proliferation of personal health applications that describe using Artificial Intelligence (AI) to assist health consumers in making health decisions based on their data and algorithmic outputs. However, it is still unclear how such descriptions influence individuals' perceptions of these apps and their recommendations. We therefore investigate how current AI descriptions influence individuals' attitudes towards algorithmic recommendations in fertility self-tracking, through a simulated study using three versions of a fertility app. We found that participants preferred AI descriptions with explanations, which they perceived as more accurate and trustworthy. Nevertheless, they were unwilling to rely on these apps for high-stakes goals because of the potential consequences of a failure. We then discuss the importance of health goals for AI acceptance, how literacy and assumptions influence perceptions of AI descriptions and explanations, and the limitations of transparency in the context of algorithmic decision-making for personal health.
Citations: 0

Reflected Reality
Qiushi Zhou, B. V. Syiem, Beier Li, Eduardo Velloso
DOI: 10.1145/3631431 · Published: 2024-01-12 · Pages: 1-28
Abstract: We propose Reflected Reality: a new dimension for augmented reality that expands the augmented physical space into mirror reflections. By synchronously tracking the physical space in front of the mirror and the reflection behind it, using an AR headset and an optional smart mirror component, reflected reality enables novel AR interactions that allow users to use their physical and reflected bodies to find and interact with virtual objects. We propose a design space for AR interaction with mirror reflections and instantiate it in a prototype system featuring a HoloLens 2 and a smart mirror. We explore the design space along the following dimensions: the user's perspective of input, the spatial frame of reference, and the direction of the mirror space relative to the physical space. Using our prototype, we illustrate a use case scenario that traverses the design space to demonstrate its interaction affordances in a practical context. To understand how users perceive the intuitiveness and ease of reflected reality interaction, we conducted an exploratory study and a formal user evaluation to characterise user performance on AR interaction tasks in reflected reality. We discuss the unique interaction affordances that reflected reality offers and outline possibilities for its future applications.
Citations: 0

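As a companion to the abstract above: the basic mapping such a system needs, placing a virtual object at the mirror image of a physical anchor, is a planar reflection. A minimal sketch, assuming a mirror described by a point and a unit normal; the function name and example values are illustrative, not from the paper:

```python
import numpy as np

def reflect_across_mirror(p, mirror_point, mirror_normal):
    """Reflect a 3D point across a mirror plane.

    The plane is given by any point on the mirror and its normal.
    This is the basic mapping an AR system needs to place a virtual
    object "behind" the mirror so it lines up with a physical anchor.
    """
    n = mirror_normal / np.linalg.norm(mirror_normal)  # ensure unit normal
    d = np.dot(p - mirror_point, n)                    # signed distance to plane
    return p - 2.0 * d * n                             # mirror image of p

# Example: a mirror in the x-y plane at z = 0 (normal along +z).
anchor = np.array([0.2, 1.5, 0.8])                     # physical object in front
print(reflect_across_mirror(anchor, np.zeros(3), np.array([0.0, 0.0, 1.0])))
# -> [ 0.2  1.5 -0.8], the same point mirrored behind the glass
```
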
RimSense
Wentao Xie, Huangxun Chen, Jing Wei, Jin Zhang, Qian Zhang
DOI: 10.1145/3631456 · Published: 2024-01-12 · Pages: 1-24
Abstract: Smart eyewear's interaction modes have attracted significant research attention. While most commercial devices have adopted touch panels on the front of the eyeglass temples for interaction, this paper identifies a drawback: the touch panel is not parallel to the display, which disrupts the direct mapping between gestures and the manipulated objects on the display. This paper therefore proposes RimSense, a proof-of-concept design for smart eyewear that introduces an alternative realm for interaction: touch gestures on the eyewear rim. RimSense leverages piezoelectric (PZT) transducers to convert the eyeglass rim into a touch-sensitive surface. When users touch the rim, the alteration in the eyeglasses' structural signal manifests as a change in the channel frequency response (CFR). This allows RimSense to recognize the executed touch gestures from the collected CFR patterns. Technically, we employ a buffered chirp as the probe signal to meet the sensing granularity and noise-resistance requirements. Additionally, we present a deep learning-based gesture recognition framework tailored for fine-grained time-sequence prediction, integrated with a Finite-State Machine (FSM) algorithm for event-level prediction to suit gestures of varying durations. We implement a functional eyewear prototype with two commercial PZT transducers. RimSense can recognize eight touch gestures on the eyeglass rim and estimate gesture durations simultaneously, allowing gestures of varying lengths to serve as distinct inputs. We evaluate RimSense on 30 subjects and show that it can sense eight gestures and an additional negative class with an F1-score of 0.95 and a relative duration estimation error of 11%. We further make the system work in real time and conduct a user study with 14 subjects to assess the practicability of RimSense through interactions with two demo applications. The user study demonstrates RimSense's good performance, high usability, learnability and enjoyability. Additionally, we conduct interviews with the subjects, whose comments provide valuable insight for future eyewear design.
Citations: 0

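The probe-and-CFR idea in the abstract can be sketched compactly: emit a chirp, then estimate the channel frequency response by dividing the received spectrum by the transmitted one. This is a minimal illustration with an assumed sample rate and band, not the authors' signal chain; `estimate_cfr` and all parameter values are hypothetical:

```python
import numpy as np
from scipy.signal import chirp

FS = 48_000          # assumed sample rate; the paper's hardware differs
DUR = 0.01           # assumed 10 ms probe duration

t = np.arange(int(FS * DUR)) / FS
probe = chirp(t, f0=16_000, f1=22_000, t1=DUR, method="linear")  # chirp probe

def estimate_cfr(tx, rx):
    """Estimate the channel frequency response as FFT(rx) / FFT(tx).

    Touching the rim changes the eyeglass's structural path, so the
    CFR magnitude pattern shifts; those patterns are the features a
    gesture classifier would consume.
    """
    TX, RX = np.fft.rfft(tx), np.fft.rfft(rx)
    eps = 1e-9                           # avoid division by ~0 out of band
    return RX / (TX + eps)

# Toy "received" signal: attenuated, slightly delayed copy of the probe.
rx = 0.6 * np.roll(probe, 5)
cfr_mag = np.abs(estimate_cfr(probe, rx))
print(cfr_mag.shape)                     # one magnitude value per frequency bin
```
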
Learning from User-driven Events to Generate Automation Sequences
Yunpeng Song, Yiheng Bian, Xiaorui Wang, Zhongmin Cai
DOI: 10.1145/3631427 · Published: 2024-01-12 · Pages: 1-22
Abstract: Enabling smart devices to learn to automate actions as users expect is a crucial yet challenging task. The traditional trigger-action rule approach to device automation is prone to ambiguity in complex scenarios. To address this issue, we propose a data-driven approach that leverages recorded user-driven event sequences to predict potential actions users may take and to generate fine-grained device automation sequences. Our key intuition is that user-driven event sequences, like human-written articles and programs, are governed by consistent semantic contexts and contain regularities that can be modeled to generate sequences expressing the user's preferences. We introduce ASGen, a deep learning framework that combines sequential information, event attributes, and external knowledge to form the event representation and to output sequences of arbitrary length to facilitate automation. To evaluate our approach from both quantitative and qualitative perspectives, we conduct two studies using a realistic dataset containing over 4.4 million events. Our results show that our approach surpasses other methods by providing more accurate recommendations, and that the automation sequences generated by our model are perceived as equally or even more rational and useful than those generated by humans.
Citations: 0

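For readers unfamiliar with event-sequence modeling: the sequence core of such a framework can be sketched as next-event prediction, here with a toy GRU in PyTorch. ASGen additionally fuses event attributes and external knowledge, which this sketch omits; `NextEventModel` and its sizes are hypothetical:

```python
import torch
import torch.nn as nn

class NextEventModel(nn.Module):
    """Toy next-event predictor over user-driven event sequences.

    Embed event IDs, run a GRU over the history, and predict the next
    event ID. This is only the sequence-modeling core, not ASGen itself.
    """
    def __init__(self, n_events: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_events, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_events)

    def forward(self, seq):                # seq: (batch, time) of event IDs
        h, _ = self.gru(self.embed(seq))
        return self.head(h[:, -1])         # logits for the next event

model = NextEventModel(n_events=500)
batch = torch.randint(0, 500, (8, 12))     # 8 histories of 12 events each
logits = model(batch)
print(logits.argmax(dim=-1))               # predicted next event per history
```

Generating an automation sequence of arbitrary length would then amount to feeding each predicted event back into the model until some stop condition is met.
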
FSS-Tag
Liqiong Chang, Xiaofeng Yang, Ruyue Liu, Guodong Xie, Fuwei Wang, Ju Wang
DOI: 10.1145/3631457 · Published: 2024-01-12 · Pages: 1-24
Abstract: Material sensing is crucial in many emerging applications, such as waste classification and hazardous material detection. Although existing Radio Frequency (RF) signal-based systems have achieved great success, their identification accuracy is limited when either RF signals cannot penetrate a target or a target has different outer and inner materials. This paper introduces a Frequency Selective Surface (FSS) tag-based high-accuracy material identification system, namely FSS-Tag, which utilises both penetrating signals and the coupling effect. Specifically, we design and attach an FSS tag to a target and use the frequency responses of the tag for material sensing, since different target materials produce different frequency responses. The key advantage of our system is that, when RF signals pass through a target with the FSS tag, the penetrating signal responds more to the inner material, while the coupling effect (between the target and the tag) reflects more of the outer material; thus, one can achieve higher sensing accuracy. The challenge lies in finding optimal tag design parameters so that the frequency responses of different target materials can be clearly distinguished. We address this challenge by establishing a tag parameter optimization model. Real-world experiments show that FSS-Tag achieves more than 91% accuracy in identifying eight common materials, and improves accuracy by up to 38% and 8% over the state-of-the-art (SOTA) penetrating-signal-based method TagScan and the SOTA coupling-effect-based method Tagtag, respectively.
Citations: 0

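The decision step implied by the abstract, matching a measured frequency response against per-material signatures, can be illustrated with a nearest-template rule. A minimal sketch with made-up resonance curves; the real system derives its separability from optimized tag hardware, and `identify_material` plus all values here are hypothetical:

```python
import numpy as np

def identify_material(measured_fr, templates):
    """Match a measured frequency-response curve to per-material templates.

    Each material shifts the tag's frequency response differently, so a
    nearest-template rule over the magnitude curve illustrates the
    classification step.
    """
    best, best_dist = None, np.inf
    for material, template in templates.items():
        dist = np.linalg.norm(measured_fr - template)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = material, dist
    return best

# Toy templates over 64 frequency bins (illustrative values only).
freqs = np.linspace(0, 1, 64)
templates = {
    "plastic": np.exp(-((freqs - 0.3) ** 2) / 0.01),  # resonance near 0.3
    "glass":   np.exp(-((freqs - 0.6) ** 2) / 0.01),  # resonance near 0.6
}
measurement = templates["glass"] + 0.05 * np.random.randn(64)
print(identify_material(measurement, templates))       # -> "glass"
```
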
PyroSense
Huaili Zeng, Gen Li, Tianxing Li
DOI: 10.1145/3631435 · Published: 2024-01-12 · Pages: 1-32
Abstract: We present PyroSense, the first-of-its-kind system that enables fine-grained 3D posture reconstruction using ubiquitous COTS passive infrared (PIR) sensors. PyroSense senses heat signals generated by the human body, and airflow due to body movement, to reconstruct the corresponding human postures in real time. PyroSense greatly advances prior PIR-based sensing designs by improving the sensitivity of COTS PIR sensors to body movement, increasing spatial resolution without additional deployment overhead, and designing intelligent algorithms that adapt to diverse environmental factors. We build a low-cost PyroSense prototype using off-the-shelf hardware components. The experimental findings indicate that PyroSense not only attains a classification accuracy of 99.46% across 15 classes, but also achieves a mean joint distance error of less than 16 cm over 14 body joints for posture reconstruction in challenging environments.
Citations: 0

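A hypothetical first stage for this kind of PIR pipeline is windowing the raw multi-channel stream and extracting simple per-window statistics for a downstream posture model. The sample rate, window sizes, and channel count below are guesses for illustration, not the paper's parameters:

```python
import numpy as np

def window_pir(stream, fs=25, win_s=2.0, hop_s=0.5):
    """Slice a multi-channel PIR stream into overlapping analysis windows.

    Posture models typically operate on short windows of heat-induced
    voltage changes rather than on the raw unbounded stream.
    """
    win, hop = int(fs * win_s), int(fs * hop_s)
    return np.stack([stream[i:i + win]
                     for i in range(0, len(stream) - win + 1, hop)])

# Toy stream: 10 s of a 4-channel PIR array at 25 Hz.
stream = np.random.randn(250, 4)
windows = window_pir(stream)
print(windows.shape)   # (n_windows, samples_per_window, channels)

# Simple per-window features a posture classifier could consume:
features = np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)
print(features.shape)  # (n_windows, 8)
```
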
MIRROR
Dong-Sig Kang, Eunsu Baek, S. Son, Youngki Lee, Taesik Gong, Hyung-Sin Kim
DOI: 10.1145/3631420 · Published: 2024-01-12 · Pages: 1-27
Abstract: We present MIRROR, an on-device video virtual try-on (VTO) system that provides realistic, private, and rapid experiences in mobile clothes shopping. Despite recent advancements in generative adversarial networks (GANs) for VTO, designing MIRROR involves two challenges: (1) data discrepancy due to restricted training data that miss various poses, body sizes, and backgrounds, and (2) local computation overhead that consumes 24% of the battery to convert a single video. To alleviate these problems, we propose a generalizable VTO GAN that not only discerns intricate human body semantics but also captures domain-invariant features without requiring additional training data. In addition, we craft lightweight, reliable clothes/pose tracking that generates refined pixel-wise warping flow without neural-net computation. As a holistic system, MIRROR integrates the new VTO GAN and the tracking method with meticulous pre/post-processing, operating in two distinct phases (online/offline). Our results on Android smartphones and real-world user videos show that, compared to a cutting-edge VTO GAN, MIRROR achieves 6.5× better accuracy with 20.1× faster video conversion and 16.9× less energy consumption.
Citations: 0

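The "pixel-wise warping flow" mentioned above has a standard application step: resample the garment image along a dense flow field. A minimal sketch using PyTorch's grid_sample; how MIRROR computes the flow itself is not shown here, and `warp_with_flow` is an assumed helper, not the authors' code:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Warp an image with a dense pixel-wise flow field.

    `flow` gives, per output pixel, the offset (in pixels) of where to
    sample from in the source image; grid_sample does the bilinear
    resampling once the grid is normalized to [-1, 1].
    """
    b, _, h, w = image.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float()          # (h, w, 2), x then y
    grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)   # add per-pixel offsets
    # Normalize to [-1, 1] as grid_sample expects.
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(image, grid, align_corners=True)

garment = torch.rand(1, 3, 128, 96)       # toy clothes image
flow = torch.zeros(1, 2, 128, 96)         # zero flow = identity warp
print(torch.allclose(warp_with_flow(garment, flow), garment, atol=1e-4))
```
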
Designing Data Visualisations for Self-Compassion in Personal Informatics
Meagan B. Loerakker, Jasmin Niess, Marit Bentvelzen, Paweł W. Woźniak
DOI: 10.1145/3631448 · Published: 2024-01-12 · Pages: 1-22
Abstract: Wearable personal trackers offer exciting opportunities to contribute to one's well-being, but they can also foster negative experiences. It remains a challenge to understand how we can design personal informatics experiences that help users frame their data in a positive manner and foster self-compassion. To explore this, we conducted a study comparing different visualisations of user-generated screen time data. We examined positive, neutral and negative framings of the data, and whether or not a point of reference was provided in a visualisation. The results show that framing techniques have a significant effect on reflection, rumination and self-compassion. We contribute insights into which design features of data representations can support positive experiences in personal informatics.
Citations: 0

UniFi
Yan Liu, Anlan Yu, Leye Wang, Bin Guo, Yang Li, E. Yi, Daqing Zhang
DOI: 10.1145/3631429 · Published: 2024-01-12 · Pages: 1-29
Abstract: In recent years, considerable effort has been devoted to exploring Wi-Fi-based sensing technologies by modeling the intricate mapping between received signals and the corresponding human activities. However, the inherent complexity of Wi-Fi signals poses significant challenges for practical applications due to their pronounced susceptibility to deployment environments. To address this challenge, we delve into the distinctive characteristics of Wi-Fi signals and distill three pivotal factors that can be leveraged to enhance the generalization capability of deep learning-based Wi-Fi sensing models: (1) effectively capture valuable input to mitigate the adverse impact of noisy measurements; (2) adaptively fuse complementary information from multiple Wi-Fi devices to boost the distinguishability of signal patterns associated with different activities; (3) extract generalizable features that can overcome the inconsistent representations of activities under different environmental conditions (e.g., locations, orientations). Leveraging these insights, we design a novel and unified sensing framework based on Wi-Fi signals, dubbed UniFi, and use gesture recognition as an application to demonstrate its effectiveness. UniFi achieves robust and generalizable gesture recognition in real-world scenarios by extracting discriminative and consistent features, unrelated to environmental factors, from pre-denoised signals collected by multiple transceivers. To achieve this, we first introduce an effective signal preprocessing approach that captures the applicable input data from noisy received signals for the deep learning model. Second, we propose a multi-view deep network based on spatio-temporal cross-view attention that integrates multi-carrier and multi-device signals to extract distinguishable information. Finally, we present mutual information maximization as a regularizer to learn environment-invariant representations via a contrastive loss, without requiring access to any signals from unseen environments for practical adaptation. Extensive experiments on the Widar 3.0 dataset demonstrate that our proposed framework significantly outperforms state-of-the-art approaches in different settings (99% and 90%-98% accuracy for in-domain and cross-domain recognition, respectively, without additional data collection and model training).
Citations: 1

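The mutual-information regularizer described above is commonly realized as an InfoNCE contrastive loss; a minimal sketch of that standard form follows (whether UniFi uses exactly this variant is an assumption on our part):

```python
import torch
import torch.nn.functional as F

def infonce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE) loss as a mutual-information surrogate.

    Row i of `positives` is the only positive for row i of `anchors`;
    all other rows in the batch act as negatives. Minimizing this loss
    maximizes a lower bound on the mutual information between the two
    views, pushing representations of the same gesture together.
    """
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(a.size(0))         # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Toy batch: 16 pairs of 128-d features from two views of the same gesture.
z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
print(infonce_loss(z1, z2).item())
```
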
ADA-SHARK
Marvin Martin, Etienne Meunier, P. Moreau, Jean-Eudes Gadenne, J. Dautel, Félicien Catherin, Eugene Pinsky, Reza Rawassizadeh
DOI: 10.1145/3631416 · Published: 2024-01-12 · Pages: 1-25
Abstract: Due to global warming, sharks are moving closer to beaches, increasing the risk to both humans and the sharks themselves. Within the past decade, several technologies have been developed to reduce the risks for swimmers and surfers. This study proposes a robust computer vision method for detecting sharks using an underwater camera monitoring system to secure coastlines. The system is autonomous, environmentally friendly, and requires little maintenance. 43,679 images extracted from 175 hours of marine life video were used to train our algorithms. Our approach allows the collection and analysis of videos in real time using an autonomous underwater camera connected to a smart buoy charged by solar panels. The videos are processed by a domain-adversarial convolutional neural network to discern sharks regardless of the background environment, with an F2-score of 83.2% and a recall of 90.9%, while human experts achieve an F2-score of 94% and a recall of 95.7%.
Citations: 0

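The domain-adversarial network mentioned in the abstract is typically trained with a gradient reversal layer (as in DANN): the domain head learns to tell environments apart while the reversed gradients push the feature extractor toward environment-invariant features. A minimal sketch; the layer sizes and heads are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, sign-flipped gradient in the backward pass.

    This is the gradient-reversal trick behind domain-adversarial training:
    the domain classifier minimizes its loss while the features feeding it
    receive the negated gradient, making domains indistinguishable.
    """
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
shark_head = nn.Linear(64, 2)    # shark vs. no shark (task branch)
domain_head = nn.Linear(64, 2)   # e.g., two background environments

x = torch.rand(8, 3, 32, 32)                 # toy image batch
z = features(x)
shark_logits = shark_head(z)                 # trained on labeled frames
domain_logits = domain_head(GradReverse.apply(z, 1.0))  # adversarial branch
print(shark_logits.shape, domain_logits.shape)
```
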