A corpus for experimental study of affect bursts in human-robot interaction
Lucile Bechade, Kevin El Haddad, Juliette Bourquin, S. Dupont, L. Devillers
Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, 2017-11-13. DOI: 10.1145/3139491.3139496
Abstract
This paper presents a data collection carried out in the framework of the Joker Project. Interaction scenarios have been designed in order to study the effects of affect bursts in a human-robot interaction and to build a system capable of using multilevel affect bursts in a human-robot interaction. We use two main audio expression cues: verbal (synthesised sentences) and nonverbal (affect bursts). The nonverbal cues used are sounds expressing disgust, amusement, fear, misunderstanding and surprise. Three different intensity levels have been generated for each emotion's sound.
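As a rough illustration of how such a multilevel stimulus set could be organised, the sketch below indexes affect-burst audio files by emotion and intensity level. The directory layout, file names, and the 1-3 intensity scale are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's system): index affect-burst
# stimuli by emotion and intensity, mirroring the five emotions and three
# intensity levels described in the abstract.
from pathlib import Path

EMOTIONS = ["disgust", "amusement", "fear", "misunderstanding", "surprise"]
INTENSITY_LEVELS = (1, 2, 3)  # assumed scale: low / medium / high

def burst_path(emotion: str, intensity: int, root: str = "affect_bursts") -> Path:
    """Return the (hypothetical) audio file for a given emotion and intensity."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {emotion}")
    if intensity not in INTENSITY_LEVELS:
        raise ValueError(f"intensity must be one of {INTENSITY_LEVELS}")
    return Path(root) / emotion / f"{emotion}_level{intensity}.wav"

if __name__ == "__main__":
    # e.g. affect_bursts/amusement/amusement_level2.wav
    print(burst_path("amusement", 2))
```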