{"title":"How do designers feel textiles?","authors":"B. Petreca, S. Baurley, N. Bianchi-Berthouze","doi":"10.1109/ACII.2015.7344695","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344695","url":null,"abstract":"Studying tactile experience is important and timely, considering how this channel is being harnessed both in terms of human interaction and for technological developments that rely on it to enhance experience of products and services. Research into tactile experience to date is present mostly within the social context, but there are not many studies on the understanding of tactile experience in interaction with objects. In this paper, we use textiles as a case study to investigate how we can get people to talk about this experience, and to understand what may be important to consider when designing technology to support it. We present a qualitative exploratory study using the `Elicitation Interview' method to obtain a first-person verbal description of experiential processes. We conducted an initial study with 6 experienced professionals from the fashion and textiles area. The analysis revealed that there are two types of touch behaviour in experiencing textiles, active and passive, which happen through `Active hand', `Passive body' and `Active tool-hand'. They can occur in any order, and with different degrees of importance and frequency in the 3 tactile-based phases of the textile selection process - `Situate', `Simulate' and `Stimulate' - and the interaction has different modes in each. We discuss these themes to inform the design of technology for affective touch in the textile field, but also to explore a methodology to uncover the complexity of affective touch and its various purposes.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"982-987"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87246422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a wearable research tool for warm mediated social touches","authors":"Isabel Pfab, Christian J. A. M. Willemse","doi":"10.1109/ACII.2015.7344694","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344694","url":null,"abstract":"Social touches are essential in interpersonal communication, for instance to show affect. Despite this importance, mediated interpersonal communication oftentimes lacks the possibility to touch. A human touch is a complex composition of several physical qualities and parameters, but different haptic technologies allow us to isolate such parameters and to investigate their opportunities and limitations for affective communication devices. In our research, we focus on the role that temperature may play in affective mediated communication. In the current paper, we describe the design of a wearable `research tool' that will facilitate systematic research on the possibilities of temperature in affective communication. We present use cases, and define a list of requirements accordingly. Based on a requirement fulfillment analysis, we conclude that our research tool can be of value for research on new forms of affective mediated communication.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"550 1","pages":"976-981"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77140928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monocular 3D facial information retrieval for automated facial expression analysis","authors":"Meshia Cédric Oveneke, Isabel Gonzalez, Weiyi Wang, D. Jiang, H. Sahli","doi":"10.1109/ACII.2015.7344634","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344634","url":null,"abstract":"Understanding social signals is a very important aspect of human communication and interaction and has therefore attracted increased attention from various research areas. Among the different types of social signals, particular attention has been paid to facial expression of emotions and its automated analysis from image sequences. Automated facial expression analysis is a very challenging task due to the complex three-dimensional deformation and motion of the face associated to the facial expressions and the loss of 3D information during the image formation process. As a consequence, retrieving 3D spatio-temporal facial information from image sequences is essential for automated facial expression analysis. In this paper, we propose a framework for retrieving three-dimensional facial structure, motion and spatio-temporal features from monocular image sequences. First, we estimate monocular 3D scene flow by retrieving the facial structure using shape-from-shading (SFS) and combine it with 2D optical flow. Secondly, based on the retrieved structure and motion of the face, we extract spatio-temporal features for automated facial expression analysis. Experimental results illustrate the potential of the proposed 3D facial information retrieval framework for facial expression analysis, i.e. facial expression recognition and facial action-unit recognition on a benchmark dataset. This paves the way for future research on monocular 3D facial expression analysis.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"15 1","pages":"623-629"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77674028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ECA expressing appreciations","authors":"Sabrina Campano, Caroline Langlet, N. Glas, C. Clavel, C. Pelachaud","doi":"10.1109/ACII.2015.7344691","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344691","url":null,"abstract":"In this paper, we propose a computational model that provides an Embodied Conversational Agent (ECA) with the ability to generate verbal other-repetition (repetitions of some of the words uttered in the previous user speaker turn) when interacting with a user in a museum setting. We focus on the generation of other-repetitions expressing emotional stances in appreciation sentences. Emotional stances and their semantic features are selected according to the user's verbal input, and ECA's utterance is generated according to these features. We present an evaluation of this model through users' subjective reports. Results indicate that the expression of emotional stances by the ECA has a positive effect oIn this paper, we propose a computational model that provides an Embodied Conversational Agent (ECA) with the ability to generate verbal other-repetition (repetitions of some of the words uttered in the previous user speaker turn) when interacting with a user in a museum setting. We focus on the generation of other-repetitions expressing emotional stances in appreciation sentences. Emotional stances and their semantic features are selected according to the user's verbal input, and ECA's utterance is generated according to these features. We present an evaluation of this model through users' subjective reports. Results indicate that the expression of emotional stances by the ECA has a positive effect on user engagement, and that ECA's behaviours are rated as more believable by users when the ECA utters other-repetitions.n user engagement, and that ECA's behaviours are rated as more believable by users when the ECA utters other-repetitions.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"110 1","pages":"962-967"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85273604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inducing an ironic effect in automated tweets","authors":"A. Valitutti, T. Veale","doi":"10.1109/ACII.2015.7344565","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344565","url":null,"abstract":"Irony gives us a way to react creatively to disappointment. By allowing us to speak of a failed expectation as though it succeeded, irony stresses the naturalness of our expectation and the absurdity of its failure. The result of this playful use of language is a subtle valence shift as listeners are alerted to a gap between what is said and what is meant. But as irony is not without risks, speakers are often careful to signal an ironic intent with tone, body language, or if on Twitter, with the hashtag #irony. Yet given the subtlety of irony, we question the effectiveness of explicit marking, and empirically show how a stronger valence shift can be induced in automatically-generated creative tweets with more nuanced signals of irony.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"153-159"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89311633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoupling facial expressions and head motions in complex emotions","authors":"Andra Adams, M. Mahmoud, T. Baltrušaitis, P. Robinson","doi":"10.1109/ACII.2015.7344583","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344583","url":null,"abstract":"Perception of emotion through facial expressions and head motion is of interest to both psychology and affective computing researchers. However, very little is known about the importance of each modality individually, as they are often treated together rather than separately. We present a study which isolates the effect of head motion from facial expression in the perception of complex emotions in videos. We demonstrate that head motions carry emotional information that is complementary rather than redundant to the emotion content in facial expressions. Finally, we show that emotional expressivity in head motion is not limited to nods and shakes and that additional gestures (such as head tilts, raises and general amount of motion) could be beneficial to automated recognition systems.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"21 1","pages":"274-280"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86097278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Region-based image retrieval based on medical media data using ranking and multi-view learning","authors":"Wei Huang, Shuru Zeng, Guang Chen","doi":"10.1109/ACII.2015.7344672","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344672","url":null,"abstract":"In this study, a novel region-based image retrieval approach via ranking and multi-view learning techniques is introduced for the first time based on medical multi-modality data. A surrogate ranking evaluation measure is derived, and direct optimization via gradient ascent is carried out based on the surrogate measure to realize ranking and learning. A database composed of 1000 real patients data is constructed and several popular pattern recognition methods are implemented for performance evaluation compared with ours. It is suggested that our new method is superior to others in this medical image retrieval utilization from the statistical point of view.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"50 1","pages":"845-850"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86114717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engagement: A traceable motivational concept in human-robot interaction","authors":"Karl Drejing, Serge Thill, Paul E. Hemeren","doi":"10.1109/ACII.2015.7344690","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344690","url":null,"abstract":"Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"7 1","pages":"956-961"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86910778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data","authors":"Huimin Wu, Qin Jin","doi":"10.1109/ACII.2015.7344668","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344668","url":null,"abstract":"Emotion classification for microblog texts has wide applications such as in social security and business marketing areas. The amount of annotated microblog texts is very limited. In this paper, we therefore study how to utilize annotated data from other domains (source domain) to improve emotion classification on microblog texts (target domain). Transfer learning has been a successful approach for cross domain learning. However, to the best of our knowledge, little attention has been paid for automatically selecting the appropriate samples from the source domain before applying transfer learning. In this paper, we propose an effective framework to sampling available data in the source domain before transfer learning, which we name as Two-Stage Sampling. The improvement of emotion classification on Chinese microblog texts demonstrates the effectiveness of our approach.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"75 1","pages":"821-826"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84339285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition","authors":"Yelin Kim","doi":"10.1109/ACII.2015.7344653","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344653","url":null,"abstract":"My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-changing behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and increasing the understanding of affective cues embedded within complex audio-visual data.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"16 1","pages":"748-753"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84193534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}