{"title":"MuscleRehab: Improving Unsupervised Physical Rehabilitation by Monitoring and Visualizing Muscle Engagement","authors":"Junyi Zhu, Yuxuan Lei, Aashini Shah, Gila Schein, H. Ghaednia, Joseph H. Schwab, C. Harteveld, Stefanie Mueller","doi":"10.1145/3526113.3545705","DOIUrl":"https://doi.org/10.1145/3526113.3545705","url":null,"abstract":"Unsupervised physical rehabilitation has traditionally used motion tracking to determine correct exercise execution. However, motion tracking is not representative of the assessment of physical therapists, who focus on muscle engagement. In this paper, we investigate whether monitoring and visualizing muscle engagement during unsupervised physical rehabilitation improves the execution accuracy of therapeutic exercises by showing users whether they target the right muscle groups. To accomplish this, we use wearable electrical impedance tomography (EIT) to monitor muscle engagement and visualize the current state on a virtual muscle-skeleton avatar. We use additional optical motion tracking to also monitor the user’s movement. We conducted a user study with 10 participants comparing exercise execution while seeing muscle + motion data vs. motion data only, and presented the recorded data to a group of physical therapists for post-rehabilitation analysis. 
The results indicate that monitoring and visualizing muscle engagement can improve both therapeutic exercise accuracy during rehabilitation and post-rehabilitation evaluation by physical therapists.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"422 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126711023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Hierarchical Pointing Models","authors":"Hang Zhao, Sophia Gu, Chun Yu, Xiaojun Bi","doi":"10.1145/3526113.3545708","DOIUrl":"https://doi.org/10.1145/3526113.3545708","url":null,"abstract":"Bayesian hierarchical models are probabilistic models that have hierarchical structures and use Bayesian methods for inferences. In this paper, we extend Fitts’ law to be a Bayesian hierarchical pointing model and compare it with the typical pooled pointing models (i.e., treating all observations as the same pool), and the individual pointing models (i.e., building an individual model for each user separately). The Bayesian hierarchical pointing models outperform pooled and individual pointing models in predicting the distribution and the mean of pointing movement time, especially when the training data are sparse. Our investigation also shows that both noninformative and weakly informative priors are adequate for modeling pointing actions, although the weakly informative prior performs slightly better than the noninformative prior when the training data size is small. Overall, we conclude that the expected advantages of Bayesian hierarchical models hold for the pointing tasks. 
Bayesian hierarchical modeling should be adopted as a more principled and effective approach to building pointing models than the current common practice in HCI of using pooled or individual models.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132668796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
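As a rough illustration of the pooled vs. individual vs. hierarchical comparison in the abstract above, the sketch below fits Fitts’ law (MT = a + b·ID) on synthetic pointing data and approximates partial pooling by shrinking each user’s estimate toward the pooled fit. The data, the shrinkage weight, and all names are invented for illustration; this is a crude stand-in, not the paper’s actual Bayesian inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitts_fit(ID, MT):
    # Ordinary least squares for MT = a + b * ID; returns [a, b].
    A = np.column_stack([np.ones_like(ID), ID])
    coef, *_ = np.linalg.lstsq(A, MT, rcond=None)
    return coef

# Synthetic pointing data: each user has their own Fitts' law parameters.
n_users, n_trials = 5, 8
ID = rng.uniform(1, 6, size=(n_users, n_trials))      # index of difficulty (bits)
a_u = rng.normal(0.20, 0.05, n_users)                 # per-user intercepts (s)
b_u = rng.normal(0.15, 0.03, n_users)                 # per-user slopes (s/bit)
MT = a_u[:, None] + b_u[:, None] * ID + rng.normal(0, 0.05, ID.shape)

# Pooled model: all observations treated as one pool.
pooled = fitts_fit(ID.ravel(), MT.ravel())

# Individual models: a separate fit per user.
individual = np.array([fitts_fit(ID[u], MT[u]) for u in range(n_users)])

# Hierarchical stand-in: shrink each user's estimate toward the pooled one;
# more data per user means less shrinkage, mimicking partial pooling.
w = n_trials / (n_trials + 4.0)
hierarchical = w * individual + (1 - w) * pooled
```

By construction each hierarchical estimate lies between the individual and pooled fits, which is the qualitative behavior that makes hierarchical models robust when per-user data are sparse.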
{"title":"X-Bridges: Designing Tunable Bridges to Enrich 3D Printed Objects' Deformation and Stiffness","authors":"Lingyun Sun, Jiaji Li, Junzhe Ji, Deying Pan, Mingming Li, Kuangqi Zhu, Yitao Fan, Yue Yang, Ye Tao, Guanyun Wang","doi":"10.1145/3526113.3545710","DOIUrl":"https://doi.org/10.1145/3526113.3545710","url":null,"abstract":"Bridges are unique structures that appear in fused deposition modeling (FDM); they can make rigid prints flexible but have not been fully explored. This paper presents X-Bridges, an end-to-end workflow that allows novice users to design tunable bridges that can enrich 3D printed objects' deformable and physical properties. Specifically, we first provide a series of deformation primitives (e.g. bend, twist, coil, compress and stretch) with three versions of stiffness (loose, elastic, stable) based on parametrized bridging experiments. Embedding these printing parameters, we developed a design tool to modify the imported 3D model, evaluate optimized printing parameters for bridges, preview the shape-changing process, and generate the G-code file for 3D printing. Finally, we demonstrate the design space of X-Bridges through a set of applications that enable foldable, resilient, and interactive shape-changing objects.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133332306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SleepGuru: Personalized Sleep Planning System for Real-life Actionability and Negotiability","authors":"Jungeun Lee, Sungnam Kim, Minki Cheon, Hyojin Ju, Jaeeun Lee, Inseok Hwang","doi":"10.1145/3526113.3545709","DOIUrl":"https://doi.org/10.1145/3526113.3545709","url":null,"abstract":"Widely accepted sleep guidelines advise regular bedtimes and sleep hygiene. An individual’s adherence is often viewed as a matter of self-regulation and anti-procrastination. We pose a question from a different perspective: what if adherence is a matter of one’s social or professional duties, which mandate an irregular daily life incompatible with the premise of standard guidelines? We propose SleepGuru, an individually actionable sleep planning system featuring real-life compatibility and extended forecasting. Adopting theories on sleep physiology, SleepGuru builds a personalized predictor of the progression of the user’s sleep pressure over upcoming schedules and past activities sourced from her online calendar and wearable fitness tracker. The SleepGuru service then provides individually actionable multi-day sleep schedules that respect the user’s inevitable real-life irregularities while regulating her week-long sleep pressure. We elaborate on the underlying physiological principles and mathematical models, followed by a 3-stage study and deployment. We develop a mobile user interface providing individual predictions and adjustability backed by cloud-side optimization. 
We deployed SleepGuru in the wild with 20 users for 8 weeks and found positive effects on sleep quality, compliance rate, sleep efficiency, alertness, and long-term followability.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122977357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing","authors":"Daehwa Kim, Chris Harrison","doi":"10.1145/3526113.3545665","DOIUrl":"https://doi.org/10.1145/3526113.3545665","url":null,"abstract":"EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. In a user study, we show how our system can track hand pose with a mean Euclidean joint error of 11.6 mm, even when covered in fabric. We also studied 2DOF wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas, operating at different self resonances.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128600353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
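The EtherPose abstract above describes a machine learning backend that maps wrist-worn antenna measurements to a posed 3D hand. As a deliberately simplified stand-in for that backend, the sketch below regresses joint positions by nearest-neighbor averaging over impedance sweeps; the dimensions (16-point sweep, 21 joints x 3 coordinates) and all data are invented for illustration and are not the paper's actual model.

```python
import numpy as np

def knn_pose(train_X, train_Y, query, k=3):
    """Predict 3D hand-joint positions from an antenna measurement vector
    (e.g. impedance sampled across a frequency sweep) by averaging the
    poses of the k nearest training sweeps."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_Y[nearest].mean(axis=0)

# Toy calibration set: 4 poses, 16-point sweeps, 21 joints x 3 coordinates.
rng = np.random.default_rng(1)
sweeps = rng.normal(size=(4, 16))
poses = rng.normal(size=(4, 63))

# Querying with a known sweep and k=1 recovers that pose exactly.
pred = knn_pose(sweeps, poses, sweeps[2], k=1)
```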
{"title":"We-toon: A Communication Support System between Writers and Artists in Collaborative Webtoon Sketch Revision","authors":"Hyung-Kwon Ko, Subin An, Gwanmo Park, Seungkwon Kim, Daesik Kim, Bo Hyoung Kim, Jaemin Jo, Jinwook Seo","doi":"10.1145/3526113.3545612","DOIUrl":"https://doi.org/10.1145/3526113.3545612","url":null,"abstract":"We present a communication support system, We-toon, that bridges webtoon writers and artists during sketch revision (i.e., character design and draft revision). In the highly iterative design process between webtoon writers and artists, writers often have difficulty precisely articulating their feedback on sketches owing to their lack of drawing proficiency. This drawback makes the writers rely on textual descriptions and reference images found using search engines, leading to indirect and inefficient communication. Inspired by a formative study, we designed We-toon to help writers revise webtoon sketches and effectively communicate with artists. Through GAN-based image synthesis and manipulation, We-toon can interactively generate diverse reference images and synthesize them locally on any user-provided image. 
Our user study with 24 professional webtoon authors demonstrated that We-toon outperforms traditional methods in terms of communication effectiveness and writers’ satisfaction with the revised images.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128297016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HingeCore: Laser-Cut Foamcore for Fast Assembly","authors":"Muhammad Abdullah, Romeo Sommerfeld, Bjarne Sievers, Leonard Geier, J. Noack, Marcus Ding, Christoph Thieme, Laurenz Seidel, Lukas Fritzsche, Erik Langenhan, Oliver Adameck, Moritz Dzingel, T. Kern, M. Taraz, Conrad Lempert, Shohei Katakura, Hany Mohsen Elhassany, T. Roumen, Patrick Baudisch","doi":"10.1145/3526113.3545618","DOIUrl":"https://doi.org/10.1145/3526113.3545618","url":null,"abstract":"We present HingeCore, a novel type of laser-cut 3D structure made from sandwich materials, such as foamcore. The key design element behind HingeCore is what we call a finger hinge, which we produce by laser-cutting foamcore “half-way”. The primary benefit of finger hinges is that they allow for very fast assembly, as they allow models to be assembled by folding and because folded hinges stay put at the intended angle, based on the friction between fingers alone, which eliminates the need for glue or tabs. Finger hinges are also highly robust, with some 5mm foamcore models withstanding 62kg. We present HingeCoreMaker, a stand-alone software tool that automatically converts 3D models to HingeCore layouts, as well as an integration into a 3D modeling tool for laser cutting (Kyub [7]). We have used HingeCoreMaker to fabricate design objects, including speakers, lamps, and a life-size bust, as well as structural objects, such as functional furniture. 
In our user study, participants assembled HingeCore layouts 2.9x faster than layouts generated using the state-of-the-art for plate-based assembly (Roadkill [1]).","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134516509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breathing Life Into Biomechanical User Models","authors":"Aleksi Ikkala, F. Fischer, Markus Klar, Miroslav Bachinski, A. Fleig, Andrew Howes, Perttu Hämäläinen, Jörg Müller, R. Murray-Smith, A. Oulasvirta","doi":"10.1145/3526113.3545689","DOIUrl":"https://doi.org/10.1145/3526113.3545689","url":null,"abstract":"Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment, instead of imperfect sensory observations; and lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. 
We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132410548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VRhook: A Data Collection Tool for VR Motion Sickness Research","authors":"Elliott Wen, Tharindu Kaluarachchi, Shamane Siriwardhana, Vanessa Tang, M. Billinghurst, R. Lindeman, Richard Yao, James Lin, Suranga Nanayakkara","doi":"10.1145/3526113.3545656","DOIUrl":"https://doi.org/10.1145/3526113.3545656","url":null,"abstract":"Despite the increasing popularity of VR games, one factor hindering the industry’s rapid growth is motion sickness experienced by the users. Symptoms such as fatigue and nausea severely hamper the user experience. Machine learning methods could be used to automatically detect motion sickness in VR experiences, but generating the extensive labeled dataset needed is a challenging task. It requires either very time-consuming manual labeling by human experts or modification of proprietary VR application source code to capture labels. To overcome these challenges, we developed a novel data collection tool, VRhook, which can collect data from any VR game without needing access to its source code. This is achieved by dynamic hooking, where we can inject custom code into a game’s run-time memory to record each video frame and its associated transformation matrices. Using this, we can automatically extract various useful labels such as rotation, speed, and acceleration. In addition, VRhook can blend a customized screen overlay on top of game contents to collect self-reported comfort scores. 
In this paper, we describe the technical development of VRhook, demonstrate its utility with an example, and outline directions for future research.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133825981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
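The VRhook abstract above mentions deriving labels such as rotation, speed, and acceleration from each frame's transformation matrices. A minimal sketch of that derivation is below, using finite differences on the translation column and the relative rotation between consecutive frames; the function name, 4x4 camera-to-world layout, and fixed frame interval are illustrative assumptions, not VRhook's actual API.

```python
import numpy as np

def motion_labels(transforms, dt):
    """Derive per-frame speed, acceleration, and frame-to-frame rotation
    angle (degrees) from a sequence of 4x4 camera-to-world transforms
    sampled every dt seconds."""
    T = np.asarray(transforms, dtype=float)   # shape (n, 4, 4)
    pos = T[:, :3, 3]                         # translation column
    vel = np.diff(pos, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    accel = np.diff(speed) / dt
    # Relative rotation between consecutive frames: R_rel = R[i+1] @ R[i].T
    R = T[:, :3, :3]
    R_rel = np.einsum('nij,nkj->nik', R[1:], R[:-1])
    # Rotation angle from the trace identity: cos(theta) = (tr(R) - 1) / 2
    cos_t = np.clip((np.trace(R_rel, axis1=1, axis2=2) - 1) / 2, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos_t))
    return speed, accel, rot_deg

# Demo: a headset moving at a constant 10 units/s along x, with no rotation.
frames = np.tile(np.eye(4), (5, 1, 1))
frames[:, 0, 3] = np.arange(5)            # x = 0, 1, 2, 3, 4
speed, accel, rot_deg = motion_labels(frames, dt=0.1)
```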
{"title":"PassengXR: A Low Cost Platform for Any-Car, Multi-User, Motion-Based Passenger XR Experiences","authors":"Mark Mcgill, G. Wilson, Daniel Medeiros, S. Brewster","doi":"10.1145/3526113.3545657","DOIUrl":"https://doi.org/10.1145/3526113.3545657","url":null,"abstract":"We present PassengXR, an open-source toolkit for creating passenger eXtended Reality (XR) experiences in Unity. XR allows travellers to move beyond the physical limitations of in-vehicle displays, rendering immersive virtual content based on - or ignoring - vehicle motion. There are considerable technical challenges to using headsets in moving environments: maintaining the forward bearing of IMU-based headsets; conflicts between optical and inertial tracking of inside-out headsets; obtaining vehicle telemetry; and the high cost of design given the necessity of testing in-car. As a consequence, existing vehicular XR research typically relies on controlled, simple routes to compensate. PassengXR is a cost-effective, open-source in-car passenger XR solution. We provide a reference set of COTS hardware that enables the broadcasting of vehicle telemetry to multiple headsets. Our software toolkit then provides support to correct vehicle-headset alignment and to create a variety of passenger XR experiences, including: vehicle-locked content; motion- and location-based content; and co-located multi-passenger applications. PassengXR also supports the recording and playback of vehicle telemetry, assisting offline design without resorting to costly in-car testing. 
Through an evaluation-by-demonstration, we show how our platform can assist practitioners in producing novel, multi-user passenger XR experiences.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124789153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
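The PassengXR abstract above notes the challenge of maintaining a headset's forward bearing relative to a moving vehicle. One small piece of that correction can be sketched as re-expressing the headset's world-space yaw relative to the vehicle's heading; the function name and degree-based convention are illustrative assumptions, not the toolkit's actual API.

```python
def cabin_relative_yaw(headset_yaw_deg, vehicle_heading_deg):
    """Re-express a headset's world-space yaw relative to the vehicle's
    forward bearing, wrapped to the interval (-180, 180] degrees, so that
    virtual content can stay locked to the cabin as the vehicle turns."""
    rel = (headset_yaw_deg - vehicle_heading_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel
```

For example, a headset facing 10 degrees while the vehicle heads 350 degrees is looking 20 degrees to the right of the cabin's forward axis; the wrap-around avoids a spurious 340-degree jump at the 0/360 boundary.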