Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction: Latest Publications

Towards Designing Human-Centered Time Management Interfaces: The Development of 14 UX Design Guidelines for Time-related Experiences in Mobile HCI
Youngsoo Shin, Jungkyoon Yoon
{"title":"Towards Designing Human-Centered Time Management Interfaces: The Development of 14 UX Design Guidelines for Time-related Experiences in Mobile HCI","authors":"Youngsoo Shin, Jungkyoon Yoon","doi":"10.1145/3447527.3474861","DOIUrl":"https://doi.org/10.1145/3447527.3474861","url":null,"abstract":"The widespread awareness of the functional role of time in human behaviors has opened up diverse research opportunities for “human-centered” time management mobile interfaces. Although a wide range of time management systems has been researched and developed, there is a need for integrated perspectives to be represented in a more systematic way to allow better implementation of extant findings. To develop human-centered time management interfaces, this paper presents the set-up of the development of UX design guidelines for time-related experiences. We investigate extant design approaches through a systematic literature review. Furthermore, we present the first version of UX design guidelines by synthesizing 31 reviewed articles and 150 relevant design considerations. We expect that our first iteration of the guidelines, which will be reviewed and revised for our next deployment study, will serve as a source of inspiration for further academic and professional research initiatives on users’ time-related experiences.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129540897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Towards Understanding Users’ Engagement and Enjoyment in Immersive Virtual Reality-Based Exercises
A. Pyae
{"title":"Towards Understanding Users’ Engagement and Enjoyment in Immersive Virtual Reality-Based Exercises","authors":"A. Pyae","doi":"10.1145/3447527.3474872","DOIUrl":"https://doi.org/10.1145/3447527.3474872","url":null,"abstract":"Exergaming has shown promising results for promoting user's wellbeing in terms of physical and cognitive benefits. Although non-immersive virtual reality-based games have proven to be useful for user's physical wellbeing, there is limited study in immersive virtual reality (iVR) for exergaming particularly in user's engagement and enjoyment. Furthermore, there is limited study in comparing user's engagement and enjoyment between iVR exercises and conventional exercises without using digital games. Hence, in this exploratory study, user's engagement and enjoyment in doing iVR exercises were explored through a survey study. The preliminary findings from a two-week survey study show that users were more engaged in iVR than conventional exercises, and a similar pattern can be found in user's enjoyment. The findings suggest that iVR exercises are potentially useful and enjoyable for users to engage in physical exercises, while they may be an alternative to conventional exercises. These preliminary findings have created opportunities for future research in iVR-based exercises for users.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125784426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
“The Dementia Diva Strikes Again!”: A Thematic Analysis of How Informal Carers of Persons with Dementia Use TikTok
Shiroq Al-Megren, K. Majrashi, R. Allwihan
{"title":"”The Dementia Diva Strikes Again!”: A Thematic Analysis of How Informal Carers of Persons with Dementia Use TikTok","authors":"Shiroq Al-Megren, K. Majrashi, R. Allwihan","doi":"10.1145/3447527.3474857","DOIUrl":"https://doi.org/10.1145/3447527.3474857","url":null,"abstract":"Informal carers of persons with dementia often resort to social media to alleviate their sense of social isolation and cultivate their platform to share their experience in care. The present study performed a preliminary analysis on how TikTok creators share their personal experience caring for a loved one with dementia through content shared under the hashtag #dementiacaregiver. We performed a systemic review and inductive thematic analysis of 447 TikTok posts. The content under #dementiacaregiver was interpreted to form five primary themes: (1) realities of caregiving, (2) a little levity, (3) advice for caring, (4) engagement with viewers, and (5) sensory stimulation. TikTok seems to have provided carers with a tool for artistic and social expression that fostered a sense of community and a place for remote belonging.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130858480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
An Augmented Reality Guided Capture Platform for Structured and Consistent Product Reference Imagery
Nicole Tan, Rachana Sreedhar, Christian Vazquez, Jessica Stigile, Maria Gomez, Ashwin Subramani, Shrenik Sadalgi
{"title":"An Augmented Reality Guided Capture Platform for Structured and Consistent Product Reference Imagery","authors":"Nicole Tan, Rachana Sreedhar, Christian Vazquez, Jessica Stigile, Maria Gomez, Ashwin Subramani, Shrenik Sadalgi","doi":"10.1145/3447527.3474848","DOIUrl":"https://doi.org/10.1145/3447527.3474848","url":null,"abstract":"Within e-commerce, vendors require high quality, consistent reference imagery to showcase products. Capturing correct images is difficult when photographic intent has to be conveyed through written instructions, as lack of real-time guidance leads to subjectivity in the interpretation of directions. Hiring professional photographers to conform to specific aesthetics can be expensive. The ubiquity of mobile devices and rapid improvements in augmented reality have facilitated the creation of experiences with in-situ guidance. We present Guided Capture, a mobile application utilizing augmented reality to enable structured capture of consistent user-generated content for reference imagery with real-time feedback. Results from user studies have shown that our system creates a streamlined feedback loop to reduce possible user error, scalable across large groups of people.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129001509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
AR-some Chemistry Models: Interactable 3D Molecules through Augmented Reality
Mason Ulrich, Brandon Evans, F. Liu, Mina Johnson, R. Likamwa
{"title":"AR-some Chemistry Models:Interactable 3D Molecules through Augmented Reality","authors":"Mason Ulrich, Brandon Evans, F. Liu, Mina Johnson, R. Likamwa","doi":"10.1145/3447527.3474874","DOIUrl":"https://doi.org/10.1145/3447527.3474874","url":null,"abstract":"Augmented Reality (AR) presents many opportunities to design systems that can aid students in learning complex chemistry concepts. Chemistry is a 3D concept that student soften have trouble visualizing using 2D media. AR-some Chemistry Models is an AR application to visualize complex chemical molecules. Our app can generate 3D interactive molecules from chemical names, hand-drawn and printed line-angle structures of complex molecules. Allowing for handwritten input gives a student instant feedback on their self-generated understanding of a molecule and can improve knowledge retention. Students have the opportunity to visualize molecules from lecture notes, or even homework problems. Our application allows the student to interact with this molecule using simple touch screen commands to zoom in, zoom out, rotate and move the molecule in AR to understand the spatial relationship among the components. Our application also highlights and displays relevant textual information about a molecule’s functional groups when they are tapped to further improve student learning. We present an affordable and accessible study tool to help students with their learning of chemistry molecules.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114157554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
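The abstract does not detail how a chemical name becomes an interactive 3D model. As a hedged illustration of the geometry-generation step only, the sketch below uses the open-source RDKit library, assuming the name has already been resolved to a SMILES string; the AR rendering and handwriting recognition are out of scope, and none of this is the authors' implementation.

```python
# Minimal sketch: SMILES -> 3D coordinates, the kind of geometry an AR app
# could render as an interactive molecule. Uses the open-source RDKit
# library; illustrative only, not the authors' pipeline.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, already resolved from its name
mol = Chem.MolFromSmiles(smiles)
mol = Chem.AddHs(mol)                      # explicit hydrogens for 3D embedding
AllChem.EmbedMolecule(mol, randomSeed=42)  # generate an initial 3D conformer
AllChem.MMFFOptimizeMolecule(mol)          # refine the geometry with a force field

conf = mol.GetConformer()
for atom in mol.GetAtoms():
    pos = conf.GetAtomPosition(atom.GetIdx())
    # each (element, x, y, z) tuple could be handed to an AR renderer
    print(atom.GetSymbol(), round(pos.x, 3), round(pos.y, 3), round(pos.z, 3))
```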
Fostering digital literacy through a mobile demo-kit development: Co-designing didactic prototypes with older adults
Katerina Cerná, Claudia Müller
{"title":"Fostering digital literacy through a mobile demo-kit development: Co-designing didactic prototypes with older adults","authors":"Katerina Cerná, Claudia Müller","doi":"10.1145/3447527.3474849","DOIUrl":"https://doi.org/10.1145/3447527.3474849","url":null,"abstract":"Developing toolkits as a support of participatory design is a common approach when designing with and for older adults. The key aspect in designing digital tools is digital literacy of the participants and how to sustain it during the project but also after its end. Yet, not enough attention has been paid to how to use such toolkits to make PD projects results sustainable. To address this issue, we are developing a mobile demo-kit, a set of didactic prototypes, which aims to foster older participants’ digital literacy and hence make findings sustainable. We illustrate it on a practice-based study, during which we conducted participatory observation, a series of interviews and organized a series of participatory workshops online with older adults. Our preliminary findings contribute to discussion on making PD with and for older adults sustainable by focusing on what older adults can learn during the PD, how to support this process but also how to communicate the findings further on.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126287248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
SiteSCAN: Intuitive Exchange of Spatial Information in Mixed Reality
Yu Chen Chen, Kai-Hsin Chiu, Yu Ju Yang, Tsaiping Chang, Li Fei Kung, Kaiyuan Lin
{"title":"SiteSCAN: Intuitive Exchange of Spatial Information in Mixed Reality","authors":"Yu Chen Chen, Kai-Hsin Chiu, Yu Ju Yang, Tsaiping Chang, Li Fei Kung, Kaiyuan Lin","doi":"10.1145/3447527.3474876","DOIUrl":"https://doi.org/10.1145/3447527.3474876","url":null,"abstract":"Nowadays, people are shown information via different platforms and are bombarded with overwhelming information as it accumulates rapidly over time. Consequently, suitable and efficient approaches to access specific information become crucial in our daily lives. However, there’s barely a way to directly get information from our surroundings, and the existing display of information fails to describe its relation to space precisely. This research introduces SiteSCAN, a system that provides an intuitive design solution to exchange and explore spatial information. By letting people directly build the information onto our physical environment and allowing them to access the information from it, we connect the information to the surroundings it is intended for. The system also updates real-time information and enables network effect that supports system to grow gradually. Through image retrieval technology and geolocation positioning, the system can identify user’s current location and the surrounding space, which helps them retrieve information or attach information to a specific spot. With the help of SiteSCAN, users will experience an intuitive interaction with spatial information and be presented with the seamless integration of virtual and physical reality.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131256463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
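The abstract pairs geolocation positioning with image retrieval, which suggests a coarse-to-fine lookup: GPS narrows the candidate anchors, then image features pick the best match. Below is a minimal Python sketch of that idea; the function names, the 50 m radius, the flat scan over anchors, and the assumption of precomputed L2-normalised feature vectors are all illustrative, not details from the paper.

```python
# Minimal sketch of a coarse-to-fine spatial lookup: GPS filter first,
# then image-feature similarity. Illustrative assumptions throughout.
import math
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def lookup_anchor(query_vec, user_lat, user_lon, anchors, radius_m=50.0):
    """anchors: dicts with 'lat', 'lon', 'vec' (unit-norm feature), 'info'."""
    nearby = [a for a in anchors
              if haversine_m(user_lat, user_lon, a["lat"], a["lon"]) <= radius_m]
    if not nearby:
        return None
    # cosine similarity reduces to a dot product on unit-norm vectors
    best = max(nearby, key=lambda a: float(np.dot(query_vec, a["vec"])))
    return best["info"]
```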
EyeKnowYou: A DIY Toolkit to Support Monitoring Cognitive Load and Actual Screen Time using a Head-Mounted Webcam
Tharindu Kaluarachchi, Shardul Sapkota, Jules Taradel, Aristée Thevenon, Denys J. C. Matthies, Suranga Nanayakkara
{"title":"EyeKnowYou: A DIY Toolkit to Support Monitoring Cognitive Load and Actual Screen Time using a Head-Mounted Webcam","authors":"Tharindu Kaluarachchi, Shardul Sapkota, Jules Taradel, Aristée Thevenon, Denys J. C. Matthies, Suranga Nanayakkara","doi":"10.1145/3447527.3474850","DOIUrl":"https://doi.org/10.1145/3447527.3474850","url":null,"abstract":"Studies show that frequent screen exposure and increased cognitive load can cause mental-health issues. Although expensive systems capable of detecting cognitive load and timers counting on-screen time exist, literature has yet to demonstrate measuring both factors across devices. To address this, we propose an inexpensive DIY-approach using a single head-mounted webcam capturing the user’s eye. By classifying camera feed using a 3D Convolutional Neural Network, we can determine increased cognitive load and actual screen time. This works because the camera feed contains corneal surface reflection, as well as physiological parameters that contain information on cognitive load. Even with a small data set, we were able to develop generalised models showing 70% accuracy. To increase the models’ accuracy, we seek the community’s help by contributing more raw data. Therefore, we provide an opensource software and a DIY-guide to make our toolkit accessible to human factors researchers without an engineering background.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133878217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
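For readers unfamiliar with classifying video using a 3D CNN, the sketch below shows the general shape of such a model in PyTorch, operating on short eye-camera clips. The architecture, layer sizes, and two-class output are assumptions for illustration; the authors' actual model may differ, so consult their open-source software.

```python
# Minimal sketch (PyTorch) of a 3D-CNN video classifier of the kind the
# abstract describes; layer sizes and the two-class head are assumptions,
# not the authors' published model.
import torch
import torch.nn as nn

class EyeClipClassifier(nn.Module):
    """Classifies short eye-camera clips, e.g. into {low load, high load}."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # (B,3,T,H,W) -> (B,16,T,H,W)
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve T, H, and W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, channels, frames, height, width)
        x = self.features(clips).flatten(1)
        return self.head(x)

model = EyeClipClassifier()
dummy = torch.randn(4, 3, 16, 64, 64)  # 4 clips of 16 frames at 64x64
logits = model(dummy)                  # shape: (4, 2)
```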
Low-level Voice and Hand-Tracking Interaction Actions: Explorations with Let's Go There
Jaisie Sin, Cosmin Munteanu
{"title":"Low-level Voice and Hand-Tracking Interaction Actions: Explorations with Let's Go There","authors":"Jaisie Sin, Cosmin Munteanu","doi":"10.1145/3447527.3474875","DOIUrl":"https://doi.org/10.1145/3447527.3474875","url":null,"abstract":"Hand-tracking allows users to engage with a virtual environment with their own hands, rather than the more traditional method of using accompanying controllers in order to operate the device they are using and interact with the virtual world. We seek to explore the range of low-level interaction actions and high-level interaction tasks and domains can be associated with the multimodal hand-tracking and voice input in VR. Thus, we created Let's Go There, which explores this joint-input method. So far, we have identified four low-level interaction actions which are exemplified by this demo: positioning oneself, positioning others, selection, and information assignment. We anticipate potential high-level interaction tasks and domains to include customer service training, social skills training, and cultural competency training (e.g. when interacting with older adults). Let's Go There, the system described in this paper, had been previously demonstrated at CUI 2020 and MobileHCI 2021. We have since updated our approach to its development to separate it into low- and high-level interactions. Thus, we believe there is value in bringing it to MobileHCI again to highlight these different types of interactions for further showcase and discussion.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129926606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Understanding the influence of stress on sedentary workers’ sitting behavior in screen-based interaction context
Yang Chen, C. Yen
{"title":"Understanding the influence of stress on sedentary workers’ sitting behavior in screen-based interaction context","authors":"Yang Chen, C. Yen","doi":"10.1145/3447527.3474854","DOIUrl":"https://doi.org/10.1145/3447527.3474854","url":null,"abstract":"Sedentary work has become a significant part of a workday context, and this situation becomes more salient due to the COVID-19 pandemic. Existing studies show that body posture can be a good indicator of emotional states. However, to our knowledge, there are no studies that investigated the relationship between the characteristics of whole-body regions while seated and affective information related to stress for sedentary workers in screen-based working scenarios. This paper conducted a preliminary within-subjects study with eight participants performing three types of screen-based tasks at different difficulty levels for simulating natural working conditions. We developed a rapid posture coding technique to analyze sedentary workers’ real-time sitting behavior and deployed multiple methods for continuously detecting their stress conditions. The results indicated that stress conditions and task determinants play an essential role in the postural changes of different body regions while seated. Our preliminary findings provide design implications and recommendations for developing a more unobtrusive health-promoting system in the real-life working context.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123103122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
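The paper's rapid posture-coding technique is not specified in the abstract. As one way to make the idea concrete, the sketch below bins per-region joint angles (e.g., from a pose estimator) into discrete posture codes over a time window; the regions, thresholds, and code labels are invented for illustration and are not the authors' scheme.

```python
# Minimal sketch of a rapid posture-coding scheme: bin each body region's
# mean angle within a time window into a small code set. All regions,
# thresholds, and codes here are illustrative assumptions.
from statistics import mean

# code book: region -> ordered list of (upper_bound_deg, code)
CODEBOOK = {
    "trunk": [(10, "upright"), (30, "leaning"), (180, "slumped")],
    "neck":  [(15, "neutral"), (35, "flexed"),  (180, "dropped")],
}

def code_region(region: str, angle_deg: float) -> str:
    for upper, code in CODEBOOK[region]:
        if angle_deg <= upper:
            return code
    return "unknown"

def code_window(samples: dict[str, list[float]]) -> dict[str, str]:
    """samples: region -> per-frame angles within one time window."""
    return {region: code_region(region, mean(angles))
            for region, angles in samples.items()}

# e.g. one 2-second window sampled at ~1.5 fps
window = {"trunk": [12.0, 14.5, 13.2], "neck": [38.0, 41.2, 40.1]}
print(code_window(window))  # {'trunk': 'leaning', 'neck': 'dropped'}
```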