AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877833
J. Broekens, A. Pronker, Marian Neuteboom
Real time labeling of affect in music using the AffectButton
Abstract: Valid, reliable and quick measurement of emotion and affect is an important challenge for the use of emotion and affect in human-technology interaction. Emotion and affect can be measured in two ways: explicitly, where the user is asked for feedback, and implicitly, where signals from the user are automatically translated into affective and emotional meaning (affect recognition). Here we focus on explicit affective feedback. More specifically, we focus on the evaluation of an affect measurement tool called the AffectButton. Previous evaluation studies [2] showed that the AffectButton enables users to give affective feedback in a low-effort, reliable and valid way. In this paper we report a study involving real-time affective labeling of movie music by primarily high school students, i.e., a realistic domain with mainstream users. Our results show that (a) users (n=21) are able to use the AffectButton in real time while listening to the music; (b) the labeling closely follows the changes in the music and gives insight into the different affective dimensions of the music; and (c) objective music properties correlate with these affective dimensions, replicating the findings of others. This provides evidence that the AffectButton is a viable affect measurement tool usable by non-expert users in realistic real-time domains.
AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877835
T. Baltrušaitis, L. Riek, P. Robinson
Synthesizing expressions using facial feature point tracking: how emotion is conveyed
Abstract: Many approaches to the analysis and synthesis of facial expressions rely on automatically tracking landmark points on human faces. However, this approach is usually chosen for ease of tracking rather than for its ability to convey affect. We conducted an experiment that evaluated the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task. The experiment compared the mental state recognition rates of participants who viewed videos of human actors and of synthetic characters (a physical android robot, a virtual avatar, and virtual stick figure drawings) enacting various facial expressions. All expressions made by the synthetic characters were generated automatically from the 22 facial feature points tracked in the videos of the human actors. Our results show no difference in accuracy across the three synthetic representations; however, all three were less accurate than the original human actor videos that generated them. Overall, facial expressions showing surprise were more easily identifiable than other mental states, suggesting that a geometric approach to synthesis may be better suited to some mental states than others.
AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877848
Peggy Wu, C. Miller
Can polite computers produce better human performance
Abstract: We claim that concepts from human-human social interaction can be extended and utilized to facilitate, inform, and predict human-computer interaction and perceptions. By expanding on the qualitative model of politeness proposed by Brown and Levinson, we created a quantitative, computational model of etiquette that allows a machine to interpret and display politeness. The results of a human subject study show that the variables included in our model have important effects on subjects' decision making and performance in our experimental tasks. The results also demonstrate that variations in etiquette can produce objective, measurable consequences in human performance.
AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877846
Agnès Delaborde, L. Devillers
Use of nonverbal speech cues in social interaction between human and robot: emotional and interactional markers
Abstract: We focus on the audio cues required for interaction between a human and a robot. We argue that a multi-level use of different paralinguistic cues is needed to guide the robot's decisions, and our challenge is to determine how to use them to steer the human-robot interaction. In this paper we propose a protocol for a study of how paralinguistic cues can affect human-robot interaction, by interpreting low-level cues computed from speech into an emotional and interactional profile of the user. The study will be carried out through a game between two children and a robot.
AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877832
H. P. Martínez, Georgios N. Yannakakis
Genetic search feature selection for affective modeling: a case study on reported preferences
Abstract: Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method, developed as a global-search algorithm, for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search on a dataset derived from a game survey experiment, which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method is capable of picking subsets of features that generate more accurate affective models.
AFFINE '10. Pub Date: 2010-10-29. DOI: 10.1145/1877826.1877845
Jean-David Boucher, J. Ventre-Dominey, Peter Ford Dominey, Sacha Fagel, G. Bailly
Facilitative effects of communicative gaze and speech in human-robot cooperation
Abstract: Human interaction in natural environments relies on a variety of perceptual cues to guide and stabilize the interaction. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities and should therefore be able to manipulate and exploit these communicative cues in cooperation with their human partners. In the current research we identify a set of principal communicative speech and gaze cues in human-human interaction, and then formalize and implement these cues in a humanoid robot. The objective of the work is to make the humanoid robot more human-like in its ability to communicate with humans. The first phase of this research, described here, is to provide the robot with a generative capability: that is, to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. We demonstrate the pertinence of these cues through statistical measures of human action times in a cooperative task, showing that gaze significantly facilitates cooperation as measured by human response times.