2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW): Latest Publications

An investigation on the automatic generation of music and its application into video games
Germán Ruiz Marcos
DOI: 10.1109/ACIIW.2019.8925275
Abstract: This paper presents a description of the author's PhD research plan and its progress to date. By way of introduction, some gaps and challenges concerning algorithmic composition and its literature are pointed out. Motivated by these, a set of research questions is given, exploring the possibility of generating music that matches a target tension and its applications in video games. To give a brief overview of the background, the most relevant models of tension in music are introduced, as well as the most recent related work. The research approach is then presented: a summary of the scope of the problem, framed by the gaps motivating the project and the challenges arising from related work, and the methodology chosen to explore the research questions. A brief review of the work to date is included, emphasising the design of an automatic music generator and the empirical study carried out to test its capabilities. To conclude, the steps presented in the methodology are turned into a future plan, and some expected research contributions are outlined.
Citations: 0
Automatic Group Level Affect and Cohesion Prediction in Videos
Garima Sharma, Shreya Ghosh, Abhinav Dhall
DOI: 10.1109/ACIIW.2019.8925231
Abstract: This paper proposes a database for group-level emotion recognition in videos. The motivation comes from the large amount of content that users share online, which gives us the opportunity to use this perceived affect for various tasks. Most work in this area has been restricted to controlled environments. In this paper, we explore group-level emotion and cohesion in real-world environments. Moving from a controlled environment to real-world scenarios involves several challenges, such as face-tracking limitations, illumination variations, occlusion and the type of gathering. To address these challenges, we propose the 'Video-level Group AFfect' (VGAF) database, containing 1,004 videos downloaded from the web. The collected videos have large variations in terms of gender, ethnicity, type of social event, number of people, pose, etc. We have labelled our database for group-level emotion and cohesion tasks and propose a baseline based on the Inception V3 network.
Citations: 34
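The baseline in the abstract above classifies whole videos with an Inception V3 backbone. A minimal sketch of the video-level step only, assuming per-frame class scores are averaged into one prediction; the frame scorer is a stand-in for the CNN, and the three-class label set is an assumption, not the paper's taxonomy:

```python
# Hypothetical sketch: classify each sampled frame, then average the
# per-frame class scores to obtain one video-level affect prediction.
# The real work uses an Inception V3 backbone as the frame scorer.
from typing import Callable, List, Sequence

EMOTIONS = ["negative", "neutral", "positive"]  # assumed label set

def predict_video(frames: Sequence, frame_scores: Callable[[object], List[float]]) -> str:
    """Average per-frame class scores and return the arg-max emotion label."""
    totals = [0.0] * len(EMOTIONS)
    for frame in frames:
        scores = frame_scores(frame)  # e.g. softmax output of a CNN backbone
        for i, s in enumerate(scores):
            totals[i] += s
    avg = [t / len(frames) for t in totals]
    return EMOTIONS[max(range(len(avg)), key=avg.__getitem__)]
```

Mean pooling over frames is the simplest temporal aggregation; attention or recurrent pooling are common alternatives.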
An Open-Source Avatar for Real-Time Human-Agent Interaction Applications
Kevin El Haddad, F. Zajéga, T. Dutoit
DOI: 10.1109/ACIIW.2019.8925115
Abstract: In this work we present an open-source avatar project intended for use in Human-Agent Interaction (HAI) systems. We demonstrate a system that is portable, runs in real time and is implemented in a modular way. This is an attempt to respond to the scientific community's need for an open-source, free-to-use, efficient virtual agent flexible enough to be used in projects ranging from rapid prototyping and system testing to entire HAI applications and experiments.
Citations: 2
Wearable Facial Action Unit Classification from Near-field Infrared Eye Images using Deformable Models
Hang Li, Siyuan Chen, J. Epps
DOI: 10.1109/ACIIW.2019.8925131
Abstract: Developments in head-mounted displays and wearable eyewear devices provide an opportunity to recognize expressions from near-field images; however, this is not an area that has received significant research attention to date. In this study, we explore the identification of seven basic upper facial action units by analyzing just part of the face, i.e. an infrared eye image, and applying a deformable eye model to obtain detailed information from the eye region. Based on this eye model, a novel feature extraction method is proposed that is a hybrid of geometric and appearance features extracted from around the edge of the eye. Evaluation on a novel database of near-field infrared facial expressions from 16 participants shows that 7-class upper face action unit classification can achieve around 78.8% accuracy using the proposed method, which is promising for wearable applications of automatic facial expression analysis.
Citations: 0
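The abstract above combines geometric and appearance features around the eye edge. A toy illustration of what such a hybrid feature vector could look like; the landmark layout and the specific features (openness ratio, edge-intensity statistics) are assumptions for illustration, not the authors' design:

```python
# Hypothetical hybrid eye features: geometric measurements from fitted
# landmarks plus appearance statistics sampled along the eye edge.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def eye_features(landmarks: List[Point], intensities: List[float]) -> Dict[str, float]:
    """Geometric: openness ratio; appearance: mean/std of edge intensities."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    mean_i = sum(intensities) / len(intensities)
    var_i = sum((v - mean_i) ** 2 for v in intensities) / len(intensities)
    return {
        "openness": height / width if width else 0.0,  # geometric cue
        "edge_mean": mean_i,                           # appearance cue
        "edge_std": math.sqrt(var_i),                  # appearance cue
    }
```

Such a vector would then feed a standard classifier (e.g. an SVM) for the 7-class AU decision.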
Computational Stylometry and Machine Learning for Gender and Age Detection in Cyberbullying Texts
A. Pascucci, Vincenzo Masucci, J. Monti
DOI: 10.1109/ACIIW.2019.8925101
Abstract: The aim of this paper is to show the importance of Computational Stylometry (CS) and Machine Learning (ML) support in detecting an author's gender and age in cyberbullying texts. We developed a cyberbullying detection platform and report its performance in terms of Precision, Recall and F-measure for gender and age detection on the cyberbullying texts we collected.
Citations: 7
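The paper above reports Precision, Recall and F-measure. For reference, a from-scratch sketch of those three metrics for a single target class (the label values are invented for the example):

```python
# Precision, recall and F1 for one positive class, computed from scratch.
from typing import Sequence, Tuple

def prf(y_true: Sequence[str], y_pred: Sequence[str], positive: str) -> Tuple[float, float, float]:
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice a library routine such as scikit-learn's `precision_recall_fscore_support` does the same job with per-class averaging options.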
Using machine learning to generate engaging behaviours in immersive virtual environments
Georgiana Cristina Dobre
DOI: 10.1109/ACIIW.2019.8925113
Abstract: Our work aims at implementing autonomous agents for Immersive Virtual Reality (IVR). With the advances in IVR environments, users can be more engaged and respond realistically to events delivered in IVR, a state described in the literature as presence. Agents with engaging verbal and nonverbal behaviour help preserve the sense of presence in IVR. Gaze behaviour, for instance, plays an important role, having both monitoring and communicative functions. The initial step is to build a machine learning model that generates flexible, contextual gaze behaviour and takes into account the rapport between the user and the agent. In this paper, we present our progress to date on the problem of creating realistic nonverbal behaviour. This includes analysing multimodal dyadic data, creating a data-processing pipeline, implementing a Hidden Markov Model and linking the Python scripts with the VR game engine (Unity3D). Future work consists of using richer data for more complex machine learning models, with the final aim of integrating the gaze model (plus future nonverbal behaviour models) into an autonomous virtual character framework.
Citations: 0
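The abstract above mentions a Hidden Markov Model for gaze generation. A toy sketch of the generative side of that idea, gaze behaviour as a Markov chain over discrete states; the states, targets and probabilities are invented for illustration and are not the thesis's model:

```python
# Toy gaze generator: a two-state Markov chain sampled step by step.
# In a full HMM the states would be hidden and emissions continuous.
import random
from typing import List

STATES = ["look_at_user", "avert"]
# P(next state | current state), ordered as in STATES; invented numbers.
TRANS = {"look_at_user": [0.8, 0.2], "avert": [0.5, 0.5]}

def sample_gaze(n_steps: int, seed: int = 0) -> List[str]:
    """Sample a sequence of gaze states, starting from 'look_at_user'."""
    rng = random.Random(seed)
    state = "look_at_user"
    seq = []
    for _ in range(n_steps):
        seq.append(state)
        state = rng.choices(STATES, weights=TRANS[state])[0]
    return seq
```

A sequence like this could be streamed to a game engine (e.g. over a socket to Unity3D) to drive an agent's eye targets.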
“I Need a Hug Right Now”: Affective Support Through Remote Touch Technology
Angela Chan
DOI: 10.1109/ACIIW.2019.8925135
Abstract: Touch is a contextual medium for affective conveyance. It is difficult to assign specific meanings to each instance of touch in isolation; however, when engaged within context, touch becomes a powerful and indispensable means for humans to convey feelings of love, concern and sympathy. This dissertation focuses on how remote touch, functioning alongside other contextual channels such as speech, may help to intensify or modulate one's affective experiences.
Citations: 2
A multi-layer artificial intelligence and sensing based affective conversational embodied agent
S. DiPaola, Ö. Yalçın
DOI: 10.1109/ACIIW.2019.8925291
Abstract: Building natural and conversational virtual humans is a task of formidable complexity. We believe that, especially when building agents that affectively interact with biological humans in real time, a cognitive-science-based, multi-layered sensing and artificial intelligence (AI) systems approach is needed. For this demo, we show a working version (through human interaction with it) of our modular system: a natural, conversational 3D virtual human built from AI and sensing layers. These include sensing the human user via facial emotion recognition, voice stress, the semantic meaning of words, eye gaze, heart rate and galvanic skin response. These inputs are combined with AI sensing and recognition of the environment using deep-learning natural language captioning or dense captioning. All of these are processed by our AI avatar system, allowing for an affective and empathetic conversation using NLP topic-based dialogue capable of facial expressions, gestures, breath, eye gaze and two-way voice-based conversation with the sensed human. Our lab has been building these systems in stages over the years.
Citations: 6
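The demo above fuses several sensing layers (facial expression, voice stress, heart rate, skin response) into the agent's view of the user. A minimal late-fusion sketch, assuming each layer reports a valence score in [-1, 1] with a confidence; the confidence-weighted mean is an illustrative fusion rule, not the authors' system:

```python
# Hypothetical late fusion of multimodal affect estimates:
# each layer contributes (valence, confidence); output is the
# confidence-weighted mean valence.
from typing import Dict, Tuple

def fuse_affect(layers: Dict[str, Tuple[float, float]]) -> float:
    """layers maps layer name -> (valence, confidence); returns fused valence."""
    total_w = sum(c for _, c in layers.values())
    if total_w == 0:
        return 0.0  # no confident signal from any layer
    return sum(v * c for v, c in layers.values()) / total_w
```

Weighting by confidence lets an unreliable layer (say, face tracking lost) degrade gracefully instead of corrupting the estimate.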
Studying the Effect of Robot Frustration on Children's Change of Perspective
Elmira Yadollahi, W. Johal, João Dias, P. Dillenbourg, Ana Paiva
DOI: 10.1109/ACIIW.2019.8925100
Abstract: The use of robots as peers is increasingly studied in human-robot interaction, with co-learning interactions being complex and rich, involving cognitive, affective, verbal and non-verbal processes. We aim to study co-learning interaction with robots in the light of perspective-taking, a cognitive dimension that is important for the interaction, engagement and learning of the child. This work-in-progress details one of the studies we are developing to understand perspective-taking from the Piagetian point of view. The study seeks to understand how changes in the robot's cognitive-affective state affect children's behaviour, emotional state and perception of the robot. The experiment details a scenario in which the child and the robot take turns playing a game by instructing their counterpart to reach a goal. The interaction includes a condition in which the robot expresses frustration when the child gives egocentric instructions. We manipulate the robot's emotional responses to the child's instructions as our independent variable. We hypothesize that children will try to change their perspective more when the robot expresses frustration and follows the instructions incorrectly, i.e. does not understand their perspective. Moreover, in the frustration groups, we are interested in observing whether children reciprocate the robot's behaviour by showing frustration towards the robot when it is egocentric. Consequently, we expect our analyses to help us integrate a perspective-taking model into our robotic platform that can adapt its perspective according to the educational or social aspects of the interaction.
Citations: 4
Social touch: stimuli-imitation protocol and automated recognition
Wail Elbani
DOI: 10.1109/ACIIW.2019.8925025
Abstract: Touch plays an important role in socio-emotional communication: a loving hug from a spouse, a comforting tap on the back from a friend or a push from a stranger efficiently conveys an emotion or an intent, especially when combined with other modalities such as audition and vision. In a parent-child relationship, the frequency of maternal touch is positively correlated with a child's early social development. In addition, touch occurs frequently in patient-care situations: massage therapy, for instance, reduces chronic pain and alleviates anxiety when properly applied by a nurse. Although the sense of touch is underexplored in the human-machine interaction and affective computing literature, this thesis aims to integrate it into interactive systems. In this paper we introduce a novel data-collection framework for social touch and present the social touch recognition pipeline.
Citations: 1