{"title":"Value for money: co-designing with underbanked women from rural Sri Lanka","authors":"Thilina Halloluwa, P. Bandara, Hakim Usoof, Dhaval Vyas","doi":"10.1145/3292147.3292157","DOIUrl":"https://doi.org/10.1145/3292147.3292157","url":null,"abstract":"This paper presents findings from a set of co-design workshops aimed at identifying the core values of underbanked women from rural Sri Lanka, associated with their everyday financial practices and experiences. We sought to gain in-depth insights into the aspirations, rationales, and concerns of this demography, where traditionally household finances are handled by men. In collaboration with a Microfinance Institute (MFI), we carried out two co-design workshops involving 17 participants. We used group discussions and various design activities such as persona creation to enable participants to share their experiences related to household finances. From our findings, three central values associated with household finances came out strongly: Supporting Family, Independence, and Spiritual Beliefs. We conclude by reflecting on the values identified while providing suggestions to support those through technology.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116639524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Making the MOST out of smartphone opportunities for mental health","authors":"Simon D’Alfonso, N. Carpenter, M. Alvarez-Jimenez","doi":"10.1145/3292147.3292230","DOIUrl":"https://doi.org/10.1145/3292147.3292230","url":null,"abstract":"Modern smartphones come equipped with an array of sensors, and the data from these sensors can be collected to obtain contextual and behavioural information about the phone user. Furthermore, the interesting idea that analysed patterns in this data can indicate psychological conditions or states has gained attention in recent years. In this paper we discuss some incipient work on bringing therapy components from the MOST (moderated online social therapy) web platform into a mobile system that delivers personalised real-time therapy recommendations based on such smartphone sensing information.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116665882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Press & tilt: one-handed text selection and command execution on smartphone","authors":"Toshiyuki Ando, Toshiya Isomoto, B. Shizuki, Shin Takahashi","doi":"10.1145/3292147.3292178","DOIUrl":"https://doi.org/10.1145/3292147.3292178","url":null,"abstract":"We show a text selection and text command execution method for a smartphone by tilting called Press & Tilt. The user can perform caret navigation or text selection by tilting the smartphone while pressing a key of the software keyboard. Then, by releasing the pressed key, text commands such as copy, search, and translate based on the selected text is executed; the executed text command depends on the pressed key. Neither occlusion nor the fat finger problem is of concern, because our method can perform these operations without the need to have a finger touch the upper region of the touchscreen. Also, the user can execute text commands with only one-hand.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117180309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ageing and making: a positive framing for human-computer interaction","authors":"Anna Kalma, Bernd Ploderer, Laurianne Sitbon","doi":"10.1145/3292147.3292181","DOIUrl":"https://doi.org/10.1145/3292147.3292181","url":null,"abstract":"Making is a generative and creative process that can be engaging and empowering. Craft groups and making communities have grown in popularity for the ageing population, both to pursue traditional making activities like knitting as well as to explore making through emerging interactive technologies. However, there is yet to be an overview of how older people engage in making. This paper seeks to map the landscape of making for older adults, in order to understand the characteristics and benefits of making for this demographic. We explore the literature on making and then use the lens of \"positive ageing\" to examine the benefits and challenges of making for health, security and participation of older people. We conclude by discussing opportunities for design and highlight avenues for future HCI research into supporting making for older adults.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115095053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DigiView","authors":"Perrin Anto Jones, Jodie Clothier, Xueqing Jiang","doi":"10.1145/3292147.3292245","DOIUrl":"https://doi.org/10.1145/3292147.3292245","url":null,"abstract":"DigiView addresses the issue of the current inaccessibility of libraries digital collections and the fragmented user experience when exploring content across digital and physical spheres. Using an iterative design process including user journey maps, personas, roleplaying, prototyping and more to iterate a solution which best addresses the user's needs. The result was an interactive screen embedded into the bookshelves themselves, a natural extension to the physical space that allowed users to easily access the libraries digital collection. Beyond the screens users can also curate and share content to support instead of replace libraries as community hubs. DigiView effortlessly combines the physical with the digital sphere to enhance serendipitous discovery.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125992275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of eye-tracking data with physiological signals for estimating level of understanding","authors":"Masaki Omata, Masaya Iuchi, Megumi Sakiyama","doi":"10.1145/3292147.3292233","DOIUrl":"https://doi.org/10.1145/3292147.3292233","url":null,"abstract":"We propose an e-learning content recommendation system that estimates a learner's level of understanding of a second language sentence. The system analyzes the eye-tracking data of a learner reading a text, and automatically selects the next text based on the estimation. This paper describes the system design and experimentally compares the estimation accuracies of two estimation methods (multiple regression and a neural network) and two kinds of learner-response data (eye-tracking data alone and both eye-tracking data and physiological signals). The neural network achieved higher accuracy than multiple regression, and eye-tracking data alone yielded the same or higher accuracy than the combined eye-tracking and physiological data. The average accuracy rate of the neural network using eye-tracking data was 67.86%.1","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121802604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pointing to targets with difference between motor and visual widths","authors":"Hiroki Usuba, Shota Yamanaka, Homei Miyashita","doi":"10.1145/3292147.3292150","DOIUrl":"https://doi.org/10.1145/3292147.3292150","url":null,"abstract":"In GUIs, there are clickable objects that have a difference between the motor and visual widths. For example, when looking at an item on a navigation bar, users think that the text length (the visual width) means the motor width. However, when a cursor hovers over the item, the cursor shape changes or the item is highlighted, and then users understand that the actual motor width differs from the visual width. In this study, we focus on the difference between the motor and visual widths and investigate how the difference affects user performance. Experimental results showed that 1) users aim at the motor width, 2) the reaction time is a U-shaped function whose optimal point is located where the motor and visual widths are the same, and 3) the movement time depends on the motor width. We also analyze existing GUIs and discuss the implications.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131475947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In store shelf display technology for enhancing customer brand recognition","authors":"Carlos A. Arce-Lopera, Gilberto D. Avendaño, Brayan Rodríguez, D. Victoria","doi":"10.1145/3292147.3292186","DOIUrl":"https://doi.org/10.1145/3292147.3292186","url":null,"abstract":"Customer brand recognition is a critical factor in the retail environment. Here, we tested two prototypes with ambient technology and gamification elements in a store shelf display for enhancing customer brand recognition. The first prototype focused on showcasing the environmental labor of the company when the customer played a ball tossing game. The second prototype was a shelf display with a virtual reality experience based on the brand identity. Subjects tested both prototypes using a user engagement scale and results show that they were positively perceived in usability and enhance user engagement, brand recall and brand perception.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"447 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126445479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A generalized, rapid authoring tool for intelligent tutoring systems","authors":"B. Herbert, M. Billinghurst, A. Weerasinghe, Barrett Ens, G. Wigley","doi":"10.1145/3292147.3292202","DOIUrl":"https://doi.org/10.1145/3292147.3292202","url":null,"abstract":"As computer-based training systems become increasingly integrated into real-world training, tools which rapidly author courses for such systems are emerging. However, inconsistent user interface design and limited support for a variety of domains makes them time consuming and difficult to use. We present a Generalized, Rapid Authoring Tool (GRAT), which simplifies creation of Intelligent Tutoring Systems (ITSs) using a unified web-based wizard-style graphical user interface and programming-by-demonstration approaches to reduce technical knowledge needed to author ITS logic. We implemented a prototype, which authors courses for two kinds of tasks: A network cabling task and a console device configuration task to demonstrate the tool's potential. We describe the limitations of our prototype and present opportunities for evaluating the tool's usability and perceived effectiveness.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128366456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A non-clinical approach to describing participants with intellectual disability","authors":"Laurianne Sitbon, Maria Hoogstrate, Julia Yule, Stewart Koplick, Filip Bircanin, M. Brereton","doi":"10.1145/3292147.3292206","DOIUrl":"https://doi.org/10.1145/3292147.3292206","url":null,"abstract":"Despite mounting evidence that standardised tests and diagnoses are often not appropriate to recruit and describe participants with intellectual disability while acknowledging their diversity, designers have few tools to describe their participants when reporting in academic literature. More importantly, most clinical language about intellectual disability is neither owned nor mastered by the people to whom it refers. This paper proposes an approach that integrates the executive function framework, as used and understood by practitioners, with the Instrumental Activities of Daily Living (IADL), as experienced and understood by people with intellectual disability, into a set of questions in relation to support. We discuss the applicability of our proposed approach, broadly and through the lens of reflections on a small case study.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134418467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}