{"title":"Recommending Movies Based on Mise-en-Scene Design","authors":"Yashar Deldjoo, Mehdi Elahi, P. Cremonesi, F. Garzotto, P. Piazzolla","doi":"10.1145/2851581.2892551","DOIUrl":"https://doi.org/10.1145/2851581.2892551","url":null,"abstract":"In this paper, we present an ongoing work that will ultimately result in a movie recommender system based on the Mise-en-Scène characteristics of the movies. We believe that the preferences of users on movies can be well described in terms of the mise-en-scène, i.e., the design aspects of movie making influencing aesthetic and style. Examples of mise-en-scène characteristics are lighting, colors, background, and movements. Our recommender system opens new opportunities in the design of new user interfaces able to offer a personalized way to search for interesting movies through the analysis of film styles rather than using the traditional classifications of movies based on explicit attributes such as genre and cast.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129483399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Interface Design In Agile Projects","authors":"K. Laakso, Tuomas Husu, Mikko Romppainen, Janina Fagerlund, M. Kettunen, Toni Standell","doi":"10.1145/2851581.2856687","DOIUrl":"https://doi.org/10.1145/2851581.2856687","url":null,"abstract":"In this enhanced version of our well-received tutorial in NordiCHI'14 we will teach the way we design UIs at Reaktor and share our lessons learned from more than 10 years of design in agile projects. No previous knowledge of UI design is required, but the participants should know at least the basics of agile development in order to follow the examples and discussion in the second part. The course has two parts: First we will teach how to create straightforward UI designs in a systematical fashion. This part focuses on demos and hands-on exercises with a minimal amount of theory, talk and slides. It is based on the GUIDe method and UI design courses that have been taught to hundreds of students at the University of Helsinki and developed further at Reaktor. In the second part we will present our current state of the art in combining design activities (conceptual design, UI design, graphics design, ...) with agile development. In the past 10 years, we have tried out many different approaches. We will show practical examples of real projects with their results and illustrate, what practices worked, what did not and why. This part is an interactive lesson -- the participants are most welcome to ask a lot of questions during the session. The instructors are all designers at Reaktor, a Finland-based software consultancy with offices in New York, Tokyo and Helsinki. Most of them have been teaching at University of Helsinki, Dept. of Computer Science. Today, they make sure that the software built at Reaktor solves meaningful and financially viable problems. In practice, they find out what parts of the users' work would benefit most of software support and draw straightforward UI solutions for them.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128226291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TwitchViz: A Visualization Tool for Twitch Chatrooms","authors":"Rui Pan, L. Bartram, Carman Neustaedter","doi":"10.1145/2851581.2892427","DOIUrl":"https://doi.org/10.1145/2851581.2892427","url":null,"abstract":"Twitch.tv is a flagship platform for live game streaming between players and viewers. It allows players to broadcast their gameplay to a public audience where viewers chat with each other and discuss gameplay. Current tools for analyzing live game streaming and chat rooms are limited. In this paper, we describe the design of TwitchViz: a new visualization tool with the goal of helping both players and game designers to better understand the relationship between gameplay and Twitch viewers' chatting behaviors. An initial feasibility study showed that TwitchViz supports novel ways to get an insight of gameplay issues from the patterns of chatting behaviors of viewers and highlighted design issues to address in subsequent versions of the tool.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128294748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-Enhancing of User's Behaviour Toward Privacy Settings in Social Networking Sites","authors":"Abdulhadi Alqarni, S. Sampalli","doi":"10.1145/2851581.2892508","DOIUrl":"https://doi.org/10.1145/2851581.2892508","url":null,"abstract":"Social Networking Sites (SNSs) are applications that allow users to create personal profiles to interact with friends or public and to share data such as photos and short videos. The amount of these personal disclosures has raised issues and concerns regarding SNSs' privacy. Users' attitudes toward privacy and their sharing behaviours are inconsistent because they are concerned about privacy, but continue sharing personal information. Also, the existing privacy settings are not flexible enough to prevent privacy risks. In this paper, we propose a novel model called Privacy Settings Model (PSM) that can lead users to understand, control, and update SNSs' privacy settings. We believe that this model will enhance their privacy behaviours toward SNSs' privacy settings and reduce privacy risks.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128601478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finger Placement and Hand Grasp during Smartphone Interaction","authors":"Huy Viet Le, Sven Mayer, Katrin Wolf, N. Henze","doi":"10.1145/2851581.2892462","DOIUrl":"https://doi.org/10.1145/2851581.2892462","url":null,"abstract":"Smartphones are currently the most successful mobile devices. Through their touchscreens, they combine input and output in a single interface. A body of work investigated interaction beyond direct touch. In particular, previous work proposed using the device's rear as an interaction surface and the grip of the hands that hold the device as a means of input. While previous work provides a categorization of grip styles, a detailed understanding of the preferred fingers' position during different tasks is missing. This understanding is needed to develop ergonomic grasp-based and Back-of-Device interaction techniques. We report from a study to understand users' finger position during three representative tasks. We highlight the areas that are already covered by the users' hands while using the on-screen keyboard, reading a text, and watching a video. Furthermore, we present the position of each of the user's fingers during these tasks. From the results, we derive interaction possibilities from an ergonomic perspective.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128604270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Movement Fluidity Analysis Based on Performance and Perception","authors":"Stefano Piana, Paolo Alborno, Radoslaw Niewiadomski, M. Mancini, G. Volpe, A. Camurri","doi":"10.1145/2851581.2892478","DOIUrl":"https://doi.org/10.1145/2851581.2892478","url":null,"abstract":"In this work we present a framework and an experimental approach to investigate human body movement qualities (i.e., the expressive components of non-verbal communication) in HCI. We first define a candidate movement quality conceptually, with the involvement of experts in the field (e.g., dancers, choreographers). Next, we collect a dataset of performances and we evaluate the perception of the chosen quality. Finally, we propose a computational model to detect the presence of the quality in a movement segment and we compare the outcomes of the model with the evaluation results. In the proposed on-going work, we apply this approach to a specific quality of movement: Fluidity. The proposed methods and models may have several applications, e.g., in emotion detection from full-body movement, interactive training of motor skills, rehabilitation.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128678710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the Creation of Interspecies Digital Games: An Observational Study on Cats' Interest in Interactive Technologies","authors":"P. Pons, J. Martínez","doi":"10.1145/2851581.2892381","DOIUrl":"https://doi.org/10.1145/2851581.2892381","url":null,"abstract":"There is growing interest in developing playful experiences for animals within the field of Animal-Computer Interaction (ACI). These digital games aim to improve animals' wellbeing and provide them with enriching activities. However, little research has been conducted to analyze the factors and stimuli that could lead animals to engage with a specific game. These factors could vary among different animal species, or even between individuals of the same species. Identifying the most appropriate artifacts to attract the attention of an animal species would help in the development of engaging playful activities for them. This paper describes early findings of an observational study on cats, which evaluated their interest in different kinds of technologically-based stimuli and interaction modalities. This study and further exploration of its results would inform the development of suitable and engaging playful experiences for cats.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129574905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Edibilization: Representing Data with Food","authors":"Yun Wang, Xiaojuan Ma, Qiong Luo, Huamin Qu","doi":"10.1145/2851581.2892570","DOIUrl":"https://doi.org/10.1145/2851581.2892570","url":null,"abstract":"Data communication is critical in data science. We propose data edibilization, i.e., encoding data with edible materials, as a novel approach to leverage multiple sensory channels to convey data stories. We conduct a preliminary data tasting workshop to explore how users interact with and interpret data edibilization. Based on the participants' feedback, we summarize the advantages of edibilization in terms of attractiveness, richness, memorability, affectiveness, and sociability. We also identify several challenges with data edibilization. We discuss possible pragmatic processes, enabling technologies, and potential research opportunities to provide insights into the design space of data edibilization and its practicality.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127077277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Social Communication Skills Using Kinesics Feedback","authors":"R. Barmaki","doi":"10.1145/2851581.2890378","DOIUrl":"https://doi.org/10.1145/2851581.2890378","url":null,"abstract":"Interactive training environments typically include feedback mechanisms designed to help trainees improve their performance through guided or self-reflection. When the training system deals with human-to-human communications, as one would find in a teacher, counselor or cross-cultural trainer, such feedback needs to focus on all aspects of human communication. This means that, in addition to verbal communication, nonverbal messages (kinesics in particular) must be captured and analyzed for semantic meaning. The goal of this research is to introduce interactive training models developed to improve human-to-human interaction. The specific context in which we prototype and validate these models is the TeachLivE teacher rehearsal environment developed at the University of Central Florida. We implemented an online gesture recognition application on top of the Microsoft Kinect software development kit with multiple feedback channels including visual and haptics. In a study of twelve participants rehearsing a teaching session in TeachLivE, we found that the online gesture recognition tool and its associated feedback method are effective and non-intrusive approaches for the purpose of communication-skill training. The algorithms employed, the results, and the implications for other interactive contexts are discussed in this paper.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129982460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rainforest: An Interactive Ecosystem","authors":"P. Beyls, A. Perrotta","doi":"10.1145/2851581.2891090","DOIUrl":"https://doi.org/10.1145/2851581.2891090","url":null,"abstract":"This paper describes a self-regulating artificial ecosystem in continuous exposure to human observers. Particles of variable morphology engage in local interaction and give rise to emergent overall audiovisual complexity. People only exercise influence over autonomous behavior developing in the artificial world. A machine-learning algorithm basically aims to maximize audiovisual diversity by tracking changes in systems behavior in relation to behavior in the artificial world. We suggest rewarding human-machine interaction to exist in the elaboration of dynamic relationships between spatial and cognitive human behavior and audiovisual performance in an artificial universe.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129102735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}