{"title":"Weight-Mate: Adaptive Training Support for Weight Lifting","authors":"J. Paay, J. Kjeldskov, Frederick Sorensen, T. Jensen, O. Tirosh","doi":"10.1145/3369457.3369466","DOIUrl":"https://doi.org/10.1145/3369457.3369466","url":null,"abstract":"Weightlifting is easy to learn, but difficult to master. People who do weightlifting do it to improve their health, strengthen their muscles and build their physique. However, due to the complex and precise body positioning required, even experienced weightlifters require assistance in perfecting their technique. At the same time, the training requirements of the individual change over time, as they perfect and hone their craft. To help weightlifters achieve optimum personal performance, we designed Weight-Mate, a prototype wearable system for giving weightlifters of different skill levels personalized, precise and non-distracting immediate feedback on how to correct their current body positioning during deadlift training. By iterating Weight-Mate using cooperative usability testing (CUT) with weightlifters of different competencies and their coaches, we designed a system that could adapt to individual physiology and training needs. The Weight-Mate sensor suit maps the lifter's body configuration against the ideal deadlift position throughout all stages of the lift, as defined by their coach, and provides non-intrusive feedback to the lifter to correct their body position. Our formative evaluation with ten weightlifters shows that an adaptive approach to digital weight training offers great promise in assisting weightlifters of all levels to improve their technique, and hence improve the safety of the sport.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127281544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
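The Weight-Mate abstract describes comparing the lifter's sensed body configuration against a coach-defined ideal position and issuing corrective feedback. A minimal sketch of that kind of comparison is below; all joint names, target angles, and tolerance values are invented for illustration and are not taken from the Weight-Mate system.

```python
# Hypothetical sketch: compare measured joint angles from a sensor suit
# against a coach-defined ideal pose for one stage of a lift, and emit
# a correction for any joint outside its tolerance. Joint names, angle
# targets, and tolerances below are illustrative assumptions only.

IDEAL_SETUP = {"hip": 55.0, "knee": 70.0, "back": 15.0}  # degrees
TOLERANCE = {"hip": 5.0, "knee": 8.0, "back": 3.0}       # degrees

def pose_feedback(measured, ideal=IDEAL_SETUP, tol=TOLERANCE):
    """Return (joint, correction) pairs for joints whose measured angle
    deviates from the ideal by more than that joint's tolerance."""
    feedback = []
    for joint, target in ideal.items():
        delta = measured[joint] - target
        if abs(delta) > tol[joint]:
            direction = "decrease" if delta > 0 else "increase"
            feedback.append(
                (joint, f"{direction} {joint} angle by {abs(delta):.0f} deg")
            )
    return feedback
```

For example, a measured hip angle of 65 degrees against the 55-degree target (tolerance 5) would yield a single "decrease hip angle by 10 deg" correction, while a pose within all tolerances yields no feedback.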
{"title":"Exploration of Passive Haptics Based Learning Support Method for Touch Typing","authors":"Rei Takakura, Kyohei Hakka, B. Shizuki","doi":"10.1145/3369457.3369524","DOIUrl":"https://doi.org/10.1145/3369457.3369524","url":null,"abstract":"Touch typing is a keyboard input method whereby the user types without looking at the keyboard. Because touch typing lets users keep looking at the screen, they can concentrate on what the screen shows while editing text. However, learning touch typing is difficult because it requires the memorization of all key placements. In this paper, we propose a passive haptics based learning support method for touch typing. Because the user is given a stimulus on the finger to be used for the next keystroke, our method encourages users to keep looking at the screen, helping them memorize all key placements. To investigate the effect of our method, we conducted a pilot study with users typing English phrases. The results suggest that our method may reduce the time users need to acquire touch typing.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116535133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"It's on the Cards: Designing Technology Instructions for People Living with Dementia","authors":"Erica Tandori, Jeanie Beh, Sonja Pedell","doi":"10.1145/3369457.3369502","DOIUrl":"https://doi.org/10.1145/3369457.3369502","url":null,"abstract":"This paper describes the creation of a set of dementia friendly instruction cards for participants engaged in a technology research project. The cards were designed to provide support after the research project was completed, so that participants could maintain continued and independent use of the technologies left with them. A literature review revealed that guidelines for creating text, instructions and graphics for people with dementia are almost nonexistent. The iterative process of designing and refining easy to understand instructions, given both a lack of guidelines and the significant cognitive challenges presented by dementia, is described. We conclude that the creation of the cards themselves, and the insights we discovered in creating them, are of significant value in assisting people with the uptake of technology. With clear, colourful, easily accessible and visually stimulating instructions, these cards act as a bridge between the technologies and their use, greatly enhancing the ability to support people with dementia in technology use.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122806288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Requirements for Adaptive User Interfaces for Industrial Maintenance Procedures: A discussion of context, requirements and research opportunities","authors":"Caitlin Woods, M. Hodkiewicz, T. French","doi":"10.1145/3369457.3369487","DOIUrl":"https://doi.org/10.1145/3369457.3369487","url":null,"abstract":"Maintenance work, executed by mechanics, electricians, welders and other tradespeople, is imperative for industrial equipment to operate as planned. To ensure high standards of work and safety, maintainers are required to comply with procedures that specify how tasks should be executed. In the past these procedures were paper-based, but they are increasingly available digitally in the field on tablets and other devices. Maintainers work in variable and often challenging environmental contexts: the same work can be done inside a shop or in the field, day or night, in cramped or open conditions. Often there is just one version of a procedure, although one maintainer may have 30 years' experience and another only three. Additionally, maintainers themselves have individual differences, resulting in different aptitudes for technology. Therefore, maintenance procedures are a suitable use-case for Adaptive User Interfaces (AUI). In this paper, we define six high-level requirements for an AUI for maintenance procedures and discuss opportunities for future research, particularly in the space of multi-target AUIs. This work is in its preliminary stages and could benefit from early discussion with the Human-Computer Interaction (HCI) community.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129702789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SNS and the Lived Experiences of Queer Youth","authors":"T. Armstrong, T. Leong","doi":"10.1145/3369457.3369497","DOIUrl":"https://doi.org/10.1145/3369457.3369497","url":null,"abstract":"Technology design has not adequately included a queer perspective, even though digital technologies such as social networking sites (SNS) have been shown to play vital roles in the lives and well-being of queer people. SNS provide queer people with a means to explore their identities, learn about queerness and connect to others with similar experiences. However, SNS use can also have detrimental effects, exposing queer people to harm and victimisation. To date, there has been little effort in HCI to understand the experiences of queer people with SNS. As a result, we lack understanding of how SNS and other social technologies could be designed in ways that are supportive of queer people's well-being. The findings from this exploratory study reveal how particular digital technologies can have complex effects in shaping queer people's experiences and their well-being.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"09 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124134771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breathin","authors":"Rohan Hundia, Aaron Quigley","doi":"10.1145/3369457.3369536","DOIUrl":"https://doi.org/10.1145/3369457.3369536","url":null,"abstract":"New interaction modalities in human computer interaction often explore common sensory inputs including touch, voice, gesture or motion. However, these modalities are not inclusive of the entire population, and cannot be used by people with an impairment of the relevant sensory channel. Here we propose BreathIn: an interface tool for enabling interaction with computer applications by using discreet exhalation patterns. The intent is that such patterns can be issued by anyone who can breathe. Our concept is based on detecting a user's forced exhalation patterns over a time duration using a MEMS microphone placed below the user's nose. We break down the signal into its FFT components, identify peak frequencies for forced voluntary \"breath events\", and use these in real time to distinguish between \"exhalation events\" and noise. We show two major applications of such an interaction tool: a) adaptation of computer applications using breath, b) using the breath interface as a discreet, emergency signal for prospective victims of crime.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124081831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
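The BreathIn abstract describes detecting breath events by taking the FFT of the microphone signal and checking for peak frequencies above noise. A minimal sketch of that detection step is below; the sample rate, frequency band, and threshold are illustrative assumptions, not the parameters of the BreathIn system.

```python
import numpy as np

def detect_breath_event(window, sample_rate=16000,
                        band=(100.0, 1200.0), threshold=5.0):
    """Return True if the dominant FFT peak of one audio window falls
    inside the assumed exhalation band and exceeds a noise threshold.
    Band and threshold values are illustrative, not from BreathIn."""
    # Apply a Hann window to reduce spectral leakage, then take the
    # magnitude spectrum of the real-valued signal.
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    peak = np.argmax(spectrum)
    # A "breath event" here means: dominant energy in the assumed
    # exhalation band, and loud enough to stand out from noise.
    in_band = band[0] <= freqs[peak] <= band[1]
    return bool(in_band and spectrum[peak] > threshold)
```

In a real-time pipeline this would run over a sliding window of microphone samples; a synthetic 400 Hz tone window is classified as an event under these assumed parameters, while silence is not.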
{"title":"Ray-Casting Based Interaction Using an Extended Pull-Out Gesture for Interactive Tabletops","authors":"Shunya Fujita, Naoki Yanagihara, B. Shizuki, Shin Takahashi","doi":"10.1145/3369457.3369528","DOIUrl":"https://doi.org/10.1145/3369457.3369528","url":null,"abstract":"In this paper, we present a method that allows users to manipulate a remote object on tabletops using an extended pull-out gesture. The user performs the gesture by moving a ray-cast point projected from the dominant hand between two ray-cast points projected from the non-dominant hand. This allows the user to manipulate a remote object on a tabletop in a similar manner to manipulating a close object. We implemented a test system to verify the feasibility of our method and showed applications for manipulating remote objects.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"12 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128143312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lithium Hindsight 360: Designing a process to create movement-based VR illness narratives","authors":"E. Kim, A. Crowe","doi":"10.1145/3369457.3369526","DOIUrl":"https://doi.org/10.1145/3369457.3369526","url":null,"abstract":"Illness narratives in the medical humanities have traditionally been text-based. For some patients, text may not be sufficient nor ideal for conveying their experiences to others. Lithium Hindsight 360 (LH360) is a virtual reality prototype intended to help bipolar disorder patients envision an alternative way of communicating their illness narrative to other individuals. Dance and somatic movement are used instead of text as a communication method to help non-patients understand the physical experience of having a mental illness. The creation process for the prototype will then lead to a set of design guidelines for patients interested in creating their own movement-based VR illness narrative. These guidelines are intended to help simplify a potentially complex and expensive process. In this demonstration, participants will be able to both view the prototype and walk through part of the design process.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"39 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131154007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Buzz-Buzz Chat: Encouraging Parent-Child Communication with Multiple-Agent Chats","authors":"Ryosuke Aoki, Kosuke Sato, Naoki Ohshima, N. Mukawa","doi":"10.1145/3369457.3369489","DOIUrl":"https://doi.org/10.1145/3369457.3369489","url":null,"abstract":"Children who attend university or work for a company often live away from their parents and have few opportunities to communicate with them. This is especially true in Japanese culture, where the importance placed on not expressing emotions openly and on respecting other people's privacy creates personal distance. However, communication plays an important role in maintaining familial relationships and ascertaining family members' health by sharing daily life information. In this paper, we design \"Buzz-Buzz Chat,\" an asynchronous communication system that encourages children and their parents to communicate about the children's cooking by using multiple-agent chats. Through a case study on a university student and his mother using Buzz-Buzz Chat, we find a novel method of remote communication with less burden, combining asynchronous dialogue about meals triggered by sharing cooking activities with dialogue about daily life and family in short intervals.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132829642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overlays and Goggles and Projections, Oh My!: Exploring Public Perceptions of Augmented Reality Technologies","authors":"Alexandra Thompson, L. Potter","doi":"10.1145/3369457.3369482","DOIUrl":"https://doi.org/10.1145/3369457.3369482","url":null,"abstract":"Augmented reality (AR) technologies have been available to the general public in varying formats for several years, but confusion remains about what AR actually is, and what it can do. This paper explores how well mental models of the general public align with the standing definitions of AR from an academic perspective. We also seek to understand whether individual experience with augmented reality technologies, or self-rated willingness to adopt new technologies, correlates with the accuracy of an individual's understanding of AR. A pilot survey asking participants to describe augmented reality revealed a variety of mental models, some of which aligned with academically defined characteristics of AR. Response accuracy was lower among participants with no hands-on AR experience, and willingness to adopt new technology proved to have little to no influence on response accuracy. This paper presents some initial trends in public perceptions of augmented reality technologies, but also highlights the need for more research to establish a better understanding of mental models of AR.","PeriodicalId":258766,"journal":{"name":"Proceedings of the 31st Australian Conference on Human-Computer-Interaction","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128906982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}