{"title":"How do designers feel textiles?","authors":"B. Petreca, S. Baurley, N. Bianchi-Berthouze","doi":"10.1109/ACII.2015.7344695","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344695","url":null,"abstract":"Studying tactile experience is important and timely, considering how this channel is being harnessed both in human interaction and in technological developments that rely on it to enhance the experience of products and services. Research into tactile experience to date has focused mostly on the social context; few studies examine tactile experience in interaction with objects. In this paper, we use textiles as a case study to investigate how we can get people to talk about this experience, and to understand what may be important to consider when designing technology to support it. We present a qualitative exploratory study using the `Elicitation Interview' method to obtain first-person verbal descriptions of experiential processes. We conducted an initial study with 6 experienced professionals from the fashion and textiles area. The analysis revealed two types of touch behaviour in experiencing textiles, active and passive, which happen through `Active hand', `Passive body' and `Active tool-hand'. These can occur in any order, and with different degrees of importance and frequency, in the 3 tactile-based phases of the textile selection process - `Situate', `Simulate' and `Stimulate' - and the interaction has different modes in each. We discuss these themes to inform the design of technology for affective touch in the textile field, and also to explore a methodology for uncovering the complexity of affective touch and its various purposes.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"982-987"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87246422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoupling facial expressions and head motions in complex emotions","authors":"Andra Adams, M. Mahmoud, T. Baltrušaitis, P. Robinson","doi":"10.1109/ACII.2015.7344583","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344583","url":null,"abstract":"Perception of emotion through facial expressions and head motion is of interest to both psychology and affective computing researchers. However, very little is known about the importance of each modality individually, as they are often treated together rather than separately. We present a study which isolates the effect of head motion from facial expression in the perception of complex emotions in videos. We demonstrate that head motions carry emotional information that is complementary rather than redundant to the emotion content in facial expressions. Finally, we show that emotional expressivity in head motion is not limited to nods and shakes and that additional gestures (such as head tilts, raises and general amount of motion) could be beneficial to automated recognition systems.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"21 1","pages":"274-280"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86097278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Region-based image retrieval based on medical media data using ranking and multi-view learning","authors":"Wei Huang, Shuru Zeng, Guang Chen","doi":"10.1109/ACII.2015.7344672","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344672","url":null,"abstract":"In this study, a novel region-based image retrieval approach using ranking and multi-view learning techniques is introduced for the first time based on medical multi-modality data. A surrogate ranking evaluation measure is derived, and direct optimization via gradient ascent is carried out on the surrogate measure to realize ranking and learning. A database of data from 1000 real patients is constructed, and several popular pattern recognition methods are implemented for performance evaluation against ours. Statistical evaluation suggests that our new method is superior to the others in this medical image retrieval task.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"50 1","pages":"845-850"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86114717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inducing an ironic effect in automated tweets","authors":"A. Valitutti, T. Veale","doi":"10.1109/ACII.2015.7344565","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344565","url":null,"abstract":"Irony gives us a way to react creatively to disappointment. By allowing us to speak of a failed expectation as though it succeeded, irony stresses the naturalness of our expectation and the absurdity of its failure. The result of this playful use of language is a subtle valence shift as listeners are alerted to a gap between what is said and what is meant. But as irony is not without risks, speakers are often careful to signal an ironic intent with tone, body language, or if on Twitter, with the hashtag #irony. Yet given the subtlety of irony, we question the effectiveness of explicit marking, and empirically show how a stronger valence shift can be induced in automatically-generated creative tweets with more nuanced signals of irony.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"153-159"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89311633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An experimental study of speech emotion recognition based on deep convolutional neural networks","authors":"W. Zheng, Jian Yu, Yuexian Zou","doi":"10.1109/ACII.2015.7344669","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344669","url":null,"abstract":"Speech emotion recognition (SER) is a challenging task, since it is unclear what kind of features are able to reflect the characteristics of human emotion in speech. Traditional feature extraction methods perform inconsistently across different emotion recognition tasks, and different spectrograms provide information reflecting different emotions. This paper proposes a systematic approach to implementing an effective emotion recognition system based on deep convolutional neural networks (DCNNs) using labeled training audio data. Specifically, the log-spectrogram is computed, and principal component analysis (PCA) is used to reduce the dimensionality and suppress interferences. The PCA-whitened spectrogram is then split into non-overlapping segments, and the DCNN is constructed to learn a representation of the emotion from the segments using the labeled training speech data. Our preliminary experiments show that the proposed emotion recognition system based on DCNNs (containing 2 convolution and 2 pooling layers) achieves about 40% classification accuracy. Moreover, it also outperforms SVM-based classification using hand-crafted acoustic features.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"19 1","pages":"827-831"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84317516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data","authors":"Huimin Wu, Qin Jin","doi":"10.1109/ACII.2015.7344668","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344668","url":null,"abstract":"Emotion classification for microblog texts has wide applications in areas such as social security and business marketing, but the amount of annotated microblog text is very limited. In this paper, we therefore study how to utilize annotated data from other domains (the source domain) to improve emotion classification on microblog texts (the target domain). Transfer learning has been a successful approach for cross-domain learning. However, to the best of our knowledge, little attention has been paid to automatically selecting appropriate samples from the source domain before applying transfer learning. In this paper, we propose an effective framework for sampling available data in the source domain before transfer learning, which we name Two-Stage Sampling. The improvement in emotion classification on Chinese microblog texts demonstrates the effectiveness of our approach.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"75 1","pages":"821-826"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84339285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engagement: A traceable motivational concept in human-robot interaction","authors":"Karl Drejing, Serge Thill, Paul E. Hemeren","doi":"10.1109/ACII.2015.7344690","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344690","url":null,"abstract":"Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework, consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"7 1","pages":"956-961"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86910778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimate the intimacy of the characters based on their emotional states for application to non-task dialogue","authors":"Kazuyuki Matsumoto, Kyosuke Akita, Minoru Yoshida, K. Kita, F. Ren","doi":"10.1109/ACII.2015.7344591","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344591","url":null,"abstract":"Recently, portable digital devices equipped with voice guidance have come into wide use, increasing the demand for usability-conscious dialogue systems. One problem with existing dialogue systems is their immature support for non-task dialogue. Non-task-oriented dialogue requires schemes that enable smooth and flexible conversations with a user. For example, it would be possible to go beyond the closed relationship between the system and the user by considering the user's relationships with others in real life. In this paper, we focus on the dialogue between two characters in a drama scenario, and express their relationship on a scale of \u201cintimacy degree.\u201d Various elements relate to the intimacy degree, such as the frequency of responses to utterances and the attitude of a speaker during the dialogue. We focus on the emotional state of the speaker during the utterance to realize intimacy estimation with higher accuracy. In our evaluation, we achieved higher accuracy in intimacy estimation than the existing method based on speech role.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"66 1","pages":"327-333"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85614794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition","authors":"Yelin Kim","doi":"10.1109/ACII.2015.7344653","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344653","url":null,"abstract":"My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-changing behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and increasing the understanding of affective cues embedded within complex audio-visual data.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"16 1","pages":"748-753"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84193534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation","authors":"G. McKeown, W. Curran, J. Wagner, F. Lingenfelser, E. André","doi":"10.1109/ACII.2015.7344567","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344567","url":null,"abstract":"To support the endeavor of creating intelligent interfaces between computers and humans, the use of training materials based on realistic human-human interactions has been recognized as crucial. One effect of creating these databases has been an increased realization of the importance of often-overlooked social signals and behaviours in organizing and orchestrating our interactions. Laughter is one of these key social signals; its importance in maintaining the smooth flow of human interaction has only recently become apparent in the embodied conversational agent domain. In turn, these realizations require training data that focus on these key social signals. This paper presents a database that is well annotated and theoretically constructed with respect to understanding laughter as it is used within human social interaction. Its construction, motivation, annotation and availability are presented in detail.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"30 1","pages":"166-172"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78146251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}