Where Should I Look? Comparing Reference Frames for Spatial Tactile Cues
Erik Pescara, Anton Stubenbord, Tobias Röddiger, Likun Fang, M. Beigl
DOI: https://doi.org/10.1145/3460421.3478822
Abstract: When designing tactile displays on the wrist for spatial cues, it is crucial to keep the natural movement of the body in mind. Depending on the movement of the wrist, different reference frames can influence the output of the wristband. In this paper, we compared two possible reference frames: one where spatial cues are fixed in a wrist-centered frame of reference, and an allocentric frame of reference which fixes spatial cues in the global coordinate system. We compared both conditions in terms of reaction time, achievable accuracy and cognitive load. Our study with 20 participants shows that utilizing the allocentric reference frame reduces cognitive load (avg. 38% reduction) and reaction time (avg. 240 ms reduction), with no statistically significant difference in accuracy.
VIDENS: Vision-based User Identification from Inertial Sensors
Alejandro Sánchez Guinea, Simon Heinrich, Max Mühlhäuser
DOI: https://doi.org/10.1145/3460421.3480426
Abstract: In this paper we propose VIDENS (vision-based user identification from inertial sensors), an approach that transforms inertial sensor time-series data into images whose pixels encode patterns found over time, allowing even a simple CNN to outperform complex ad-hoc deep learning models that combine RNNs and CNNs for user identification. Our evaluation shows promising results when comparing our approach to relevant existing methods.
{"title":"Typing on Tap: Estimating a Finger-Worn One-Handed Chording Keyboard’s Text Entry Rate","authors":"Jason Tu, Angeline Vidhula Jeyachandra, Deepthi Nagesh, Naresh Prabhu, Thad Starner","doi":"10.1145/3460421.3480428","DOIUrl":"https://doi.org/10.1145/3460421.3480428","url":null,"abstract":"The Tap StrapTM enables eyes-free mobile typing using a one-hand device worn on the user’s fingers. Using the standard MacKenzie-Soukoreff text entry phrase set, 12 participants completed 480 minutes of practice with the keyboard on either their dominant or non-dominant hand. Average final typing rate was 22.11 words per minute (WPM), and letter accuracy increased from 85.11% to 91.02%. Non-dominant hand users typed more accurately than dominant after 420 minutes of practice.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132553204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOTUS: Rendering Emotions with a Wrist-worn Tactile Display","authors":"Verindi Vekemans, Ward Leenders, Sijie Zhu, Rong-Hao Liang","doi":"10.1145/3460421.3480390","DOIUrl":"https://doi.org/10.1145/3460421.3480390","url":null,"abstract":"This paper investigates whether tactile texture patterns on the wrists can be interpreted as particular emotions. A prototype watch-back tactile display, MOTUS, was implemented to press different texture patterns in various frequencies onto a wrist to convey emotions. We conducted a preliminary guessability study with the prototype. The result reveals the wearers’ agreement in interpreting the emotional states from the tactile texture patterns.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122865358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VäriWig: Interactive Coloring Wig Module","authors":"D. Brun, Jonna Häkkilä","doi":"10.1145/3460421.3478832","DOIUrl":"https://doi.org/10.1145/3460421.3478832","url":null,"abstract":"We present the design and prototype of VäriWig, an interactive coloring wig module, which allows traditional synthetic wigs to change colors depending on several factors, such as head movements, music rhythms and other external synchronizations. A large number of customized optical fibers coupled with individually controlled digital tricolor light-emitting diodes are employed to instantly change the colors of specific locks of hair placed all around the head. VäriWig was designed to be seamlessly blended with the traditional wig, even while inactive and is envisioned to be used in art shows and other entertaining social events.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130215038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HeatSight: Wearable Low-power Omni Thermal Sensing
Rawan Alharbi, Chunlin Feng, Sougata Sen, J. Jain, Josiah D. Hester, N. Alshurafa
DOI: https://doi.org/10.1145/3460421.3478811
Abstract: Thermal information surrounding a person is a rich source for understanding and identifying personal activities. Different daily activities naturally emit distinct thermal signatures from both the human body and surrounding objects; these signatures exhibit both spatial and temporal components as objects move and thermal energy dissipates, for example, when drinking a cold beverage or smoking a cigarette. We present HeatSight, a wearable system that captures the thermal environment of the wearer and uses machine learning to infer human activity from thermal, spatial, and temporal information in that environment. We achieve this by embedding five low-power thermal sensors in a pentahedron configuration, which captures a wide view of the wearer's body and the objects they interact with. We also design a battery-saving mechanism that selectively powers only those sensors necessary for detection. With HeatSight, we unlock thermal sensing as an egocentric modality for future interaction research.
{"title":"Development of an Aesthetic for a Stroke Rehabilitation System","authors":"Galina Mihaleva, F. Kow","doi":"10.1145/3460421.3478828","DOIUrl":"https://doi.org/10.1145/3460421.3478828","url":null,"abstract":"Works around stroke rehabilitation devices have largely focused on improving their performance to aid in physical training. By providing better physical training for patients, it allows them to quickly achieve adequate autonomy in life. However, this leaves little development in their aesthetics, potentially failing to address the impact on patients’ self-integrity. This paper proposes the aesthetic development of an immersive multi-sensorial stroke rehabilitation system, entitled MIDAS (Multisensorial Immersive Dynamic Autonomous System), based on the improvement of self-affirmation of aesthetic products. MIDAS consists of three subsystems, a hand exoskeleton, a Virtual Reality (VR), and an Olfactory subsystem. The functional requirement of the system and the design language of the VR subsystem were used as the basis for the aesthetic framework for the rest of the subsystems. The outcomes were a hand exoskeleton with a minimalist linkage system paired with an organic casing, and an olfactory device with a rounded form attached to a frameless face-shield.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122224884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RFInsole: Batteryless Gait-Monitoring Smart Insole Based on Passive RFID Tags","authors":"J. Jo, Huiju Park","doi":"10.1145/3460421.3478810","DOIUrl":"https://doi.org/10.1145/3460421.3478810","url":null,"abstract":"There are growing demands for daily gait-monitoring smart insoles which are light, soft, and comfortable both in the general population interested in sports and people with disabilities, such as children with cerebral palsy or Autistic Spectrum Disorder. Currently available technologies are basically battery-powered systems, which are bulky, heavy, and not ideal for in-home and everyday use. This study introduces RFInsole, a battery-less smart insole based on RFID (Radio Frequency Identification), tracking the sequence of foot pressures within a stance. RFInsole utilizes passive RFID tags and push button switches, so that the foot pressure can activate the tags at each location. Soft, thin, affordable, and simple structure of the device open broad possibilities to implement the same system into lighter and soft applications such as socks or pressure-sensing garments.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125193227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MoCapaci: Posture and gesture detection in loose garments using textile cables as capacitive antennas","authors":"Hymalai Bello, Bo Zhou, Sungho Suh, P. Lukowicz","doi":"10.1145/3460421.3480418","DOIUrl":"https://doi.org/10.1145/3460421.3480418","url":null,"abstract":"We present a wearable system to detect body postures and gestures that does not require sensors to be firmly fixed to the body or integrated into a tight-fitting garment. The sensing system can be used in a loose piece of clothing such as a coat/blazer. It is based on the well-known theremin musical instrument, which we have unobtrusively integrated into a standard men’s blazer using conductive textile antennas and OpenTheremin hardware as a prototype, the ”MoCaBlazer.” Fourteen participants with diverse body sizes and balanced gender distribution mimicked 20 arm/torso movements with the unbuttoned, single-sized blazer. State-of-the-art deep learning approaches were used to achieve average recognition accuracy results of 97.18% for leave one recording out and 86.25% for user independent recognition.","PeriodicalId":395295,"journal":{"name":"Proceedings of the 2021 ACM International Symposium on Wearable Computers","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131082298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Pilot Study Using Covert Visuospatial Attention as an EEG-based Brain-Computer Interface to Enhance AR Interaction
Nataliya Kosmyna, Chi-Yun Hu, Yujie Wang, Qiuxuan Wu, C. Scheirer, P. Maes
DOI: https://doi.org/10.1145/3460421.3480420
Abstract: In this work we propose a prototype that combines an existing augmented reality (AR) headset, the Microsoft HoloLens 2, with an electroencephalogram (EEG) brain-computer interface (BCI) based on covert visuospatial attention (CVSA): the process of focusing attention on different regions of the visual field without overt eye movements. The system does not rely on any stimulus-driven responses. Fourteen participants tested the system over the course of two days. To the best of our knowledge, this is the first integrated AR EEG-BCI prototype to explore the complementary features of an AR headset such as the HoloLens 2 and the CVSA paradigm.