{"title":"“I Hear You”: Understanding Awareness Information Exchange in an Audio-only Workspace","authors":"Oussama Metatla, N. Bryan-Kinns, T. Stockman","doi":"10.1145/3173574.3174120","DOIUrl":"https://doi.org/10.1145/3173574.3174120","url":null,"abstract":"Graphical displays are a typical means for conveying awareness information in groupware systems to help users track joint activities, but are not ideal when vision is constrained. Understanding how people maintain awareness through non-visual means is crucial for designing effective alternatives for supporting awareness in such situations. We present a lab study simulating an extreme scenario where 32 pairs of participants use an audio-only tool to edit shared audio menus. Our aim is to characterise collaboration in this audio-only space in order to identify whether and how, by itself, audio can mediate collaboration. Our findings show that the means for audio delivery and choice of working styles in this space influence types and patterns of awareness information exchange. We thus highlight the need to accommodate different working styles when designing audio support for awareness, and extend previous research by identifying types of awareness information to convey in response to group work dynamics.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80912561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Viewing Multiple Viewpoint Videos on Metacognition of Collaborative Experiences","authors":"Y. Sumi, M. Suwa, Koichi Hanaue","doi":"10.1145/3173574.3174222","DOIUrl":"https://doi.org/10.1145/3173574.3174222","url":null,"abstract":"This paper discusses the effects of multiple viewpoint videos on metacognition of experiences. We present a system for recording multiple users' collaborative experiences with wearable and environmental sensors, and another system for viewing multiple viewpoint videos that are automatically identified, extracted, and associated with individual users. We designed an experiment to compare metacognition of one's own experience based on memory alone with metacognition supported by video viewing. The experimental results show that metacognitive descriptions related to one's own mind, such as feelings and preferences, are possible regardless of whether a person is viewing videos, but episodic descriptions, such as the content of someone's utterance and what s/he felt about it, are strongly promoted by video viewing. We conducted another experiment in which the same participants performed identical metacognitive description tasks about half a year after the previous experiment. Through the experiments, we found that first-person view video is mostly used for confirming episodic facts immediately after the experience, whereas after half a year, even one's own experience often feels like the experience of others; therefore, videos capturing participants from the viewpoints of conversation partners and the environment become important for thinking back to the situations in which they were placed.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85545476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mental Health Support and its Relationship to Linguistic Accommodation in Online Communities","authors":"Eva Sharma, M. Choudhury","doi":"10.1145/3173574.3174215","DOIUrl":"https://doi.org/10.1145/3173574.3174215","url":null,"abstract":"Many online communities cater to the critical and unmet needs of individuals challenged with mental illnesses. Generally, communities engender characteristic linguistic practices, known as norms. Conformance to these norms, or linguistic accommodation, encourages social approval and acceptance. This paper investigates whether linguistic accommodation impacts a specific social feedback: the support received by an individual in an online mental health community. We first quantitatively derive two measures for each post in these communities: 1) the linguistic accommodation it exhibits, and 2) the level of support it receives. Thereafter, we build a statistical framework to examine the relationship between these measures. Although the extent to which accommodation is associated with support varies, we find a positive link between the two, consistent across 55 Reddit communities serving various psychological needs. We discuss how our work surfaces a tension in the functioning of these sensitive communities, and present design implications for improving their support provisioning mechanisms.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85910997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Large-Scale Study of iPhone App Launch Behaviour","authors":"A. Morrison, Xiaoyu Xiong, Matthew Higgs, Marek Bell, M. Chalmers","doi":"10.1145/3173574.3173918","DOIUrl":"https://doi.org/10.1145/3173574.3173918","url":null,"abstract":"There have been many large-scale investigations of users' mobile app launch behaviour, but all have been conducted on Android, even though recent reports suggest iPhones account for a third of all smartphones in use. We report on the first large-scale analysis of app usage patterns on iPhones. We conduct a reproduction study with a cohort of over 10,000 jailbroken iPhone users, reproducing several studies previously conducted on Android devices. We find some differences, but also significant similarities: e.g. communications apps are the most used on both platforms; similar patterns are apparent of few apps being very popular but there existing a 'long tail' of many apps used by the population; users show similar patterns of 'micro-usage'; almost identical proportions of people use a unique combination of apps. Such similarities add confidence but also specificity about claims of consistency across smartphones. As well as presenting our findings, we discuss issues involved in reproducing studies across platforms.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"87 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84047481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live Sketch: Video-driven Dynamic Deformation of Static Drawings","authors":"Qingkun Su, Xue Bai, Hongbo Fu, Chiew-Lan Tai, Jue Wang","doi":"10.1145/3173574.3174236","DOIUrl":"https://doi.org/10.1145/3173574.3174236","url":null,"abstract":"Creating sketch animations using traditional tools requires special artistic skills, and is tedious even for trained professionals. To lower the barrier for creating sketch animations, we propose a new system, Live Sketch, which allows novice users to interactively bring static drawings to life by applying deformation-based animation effects that are extracted from video examples. Dynamic deformation is first extracted as a sparse set of moving control points from videos and then transferred to a static drawing. Our system addresses a few major technical challenges, such as motion extraction from video, video-to-sketch alignment, and many-to-one motion-driven sketch animation. While each of these sub-problems could be difficult to solve fully automatically, we present reliable solutions by combining new computational algorithms with intuitive user interactions. Our pilot study shows that our system allows users both with and without animation skills to easily add dynamic deformation to static drawings.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77794552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conveying the Perception of Kinesthetic Feedback in Virtual Reality using State-of-the-Art Hardware","authors":"Michael Rietzler, Florian Geiselhart, Julian Frommel, E. Rukzio","doi":"10.1145/3173574.3174034","DOIUrl":"https://doi.org/10.1145/3173574.3174034","url":null,"abstract":"Including haptic feedback in current consumer VR applications is frequently challenging, since technical possibilities to create haptic feedback in consumer-grade VR are limited. While most systems include and make use of the possibility to create tactile feedback through vibration, kinesthetic feedback systems almost exclusively rely on external mechanical hardware to induce actual sensations so far. In this paper, we describe an approach to create a feeling of such sensations by using unmodified off-the-shelf hardware and a software solution for a multi-modal pseudo-haptics approach. We first explore this design space by applying user-elicited methods, and afterwards evaluate our refined solution in a user study. The results show that it is indeed possible to communicate kinesthetic feedback by visual and tactile cues only and even induce its perception. While visual clipping was generally unappreciated, our approach led to significant increases of enjoyment and presence.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"137 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72805505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imaginary Design Workbooks: Constructive Criticism and Practical Provocation","authors":"M. Blythe, Enrique Encinas, Jofish Kaye, M. Avery, R. McCabe, Kristina Andersen","doi":"10.1145/3173574.3173807","DOIUrl":"https://doi.org/10.1145/3173574.3173807","url":null,"abstract":"This paper reports on design strategies for critical and experimental work that remains constructive. We report findings from a design workshop that explored the \"home hub\" space through \"imaginary design workbooks\". These feature ambiguous images and annotations written in an invented language to suggest a design space without specifying any particular idea. Many of the concepts and narratives which emerged from the workshop focused on extreme situations: some thoughtful, some dystopian, some even mythic. One of the workshop ideas was then developed with a senior social worker who works with young offenders. A \"digital social worker\" concept was developed and critiqued simultaneously. We draw on Foucault's history of surveillance to \"defamiliarise\" both the home hub technology and the current youth justice system. We argue that the dichotomy between \"constructive\" and \"critical\" design is false because design is never neutral.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79904207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching Language to Deaf Infants with a Robot and a Virtual Human","authors":"B. Scassellati, Jake Brawer, K. Tsui, Setareh Nasihati Gilani, Melissa Malzkuhn, Barbara Manini, Adam Stone, Geo Kartheiser, A. Merla, Ari Shapiro, D. Traum, L. Petitto","doi":"10.1145/3173574.3174127","DOIUrl":"https://doi.org/10.1145/3173574.3174127","url":null,"abstract":"Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and a virtual human designed to augment language exposure for 6-12 month old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning in infants [33]. While robots are presently incapable of the dexterity and expressiveness required for signing, even if such capability existed, developmental questions would remain about the capacity of language from artificial agents to engage infants. Here we engineered the robot and avatar to provide visual language and effect socially contingent human conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76685071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Family Health Promotion in Low-SES Neighborhoods: A Two-Month Study of Wearable Activity Tracking","authors":"Herman Saksono, C. Castaneda-Sceppa, Jessica A. Hoffman, M. S. El-Nasr, V. Morris, A. Parker","doi":"10.1145/3173574.3173883","DOIUrl":"https://doi.org/10.1145/3173574.3173883","url":null,"abstract":"Low-socioeconomic status (SES) families face increased barriers to physical activity (PA)-a behavior critical for reducing and preventing chronic disease. Research has explored how wearable PA trackers can encourage increased activity, and how the adoption of such trackers is driven by people's emotions and social needs. However, more work is needed to understand how PA trackers are perceived and adopted by low-SES families, where PA may be deprioritized due to economic stresses, limited resources, and perceived crime. Accordingly, we conducted a two-month, in-depth qualitative study, exploring low-SES caregivers' perspectives on PA tracking and promotion. Our findings show how PA tracking was impacted by caregivers' attitudes toward safety, which were influenced by how they perceived social connections within their neighborhoods; and cognitive-emotional processes. We conclude that PA tracking tools for low-SES families should help caregivers and children to experience and celebrate progress.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82445133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding the Effect of In-Video Prompting on Learners and Instructors","authors":"Hyungyu Shin, Eun-Young Ko, J. Williams, Juho Kim","doi":"10.1145/3173574.3173893","DOIUrl":"https://doi.org/10.1145/3173574.3173893","url":null,"abstract":"Online instructional videos are ubiquitous, but it is difficult for instructors to gauge learners' experience and their level of comprehension or confusion regarding the lecture video. Moreover, learners watching the videos may become disengaged or fail to reflect and construct their own understanding. This paper explores instructor and learner perceptions of in-video prompting where learners answer reflective questions while watching videos. We conducted two studies with crowd workers to understand the effect of prompting in general, and the effect of different prompting strategies on both learners and instructors. Results show that some learners found prompts to be useful checkpoints for reflection, while others found them distracting. Instructors reported the collected responses to be generally more specific than what they have usually collected. Also, different prompting strategies had different effects on the learning experience and the usefulness of responses as feedback.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82599759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}