{"title":"Automatic Detection of Protective Movement Behavior with MoCap and sEMG Data for Chronic Pain Rehabilitation","authors":"Chongyang Wang","doi":"10.1109/ACIIW.2019.8925091","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925091","url":null,"abstract":"Physical rehabilitation is an important part of chronic pain (CP) management. Physiotherapists provide guidance and intervention on the exercises based on the CP patient's affective states. One important clue the physiotherapist uses to understand their patient's affective state is the presence and type of protective movement behavior. As rehabilitation is transferring from clinical settings to home-based environments, technology should provide similar service by automatically detecting the protective behavior exhibited by patients and use it as a cue to inform and adapt the support. Our research focuses on the detection of protective behavior from a deep learning (DL) perspective and using MoCap and EMG data. Based on the knowledge learned from a wider-relevant literature and the specific characteristic of protective behavior, we aim to automatically detect protective behavior with deep learning approaches and further learn its configuration pattern with explainable models. Our initial studies have demonstrated interesting accuracy improvements and also provided important knowledges about the temporal and configurational characteristics of protective behavior.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133854710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using a Quartile-based Data Transformation for Pain Intensity Classification based on the SenseEmotion Database","authors":"Peter Bellmann, Patrick Thiam, F. Schwenker","doi":"10.1109/ACIIW.2019.8925244","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925244","url":null,"abstract":"The SenseEmotion Database was collected at Ulm University for research purposes in the field of e-health. The participants of the SenseEmotion data acquisition experiments were healthy subjects exposed to three personalised levels of artificially induced pain under strictly controlled conditions. Our study focuses on the recordings from the physiological sensors, such as electrocardiography and the skin conductance. Based on that part of the data set, we propose using an unsupervised quartile-based data transformation approach, which removes outlier values for better nearest neighbour classification.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121935077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Neural Networks for Depression Recognition Based on Facial Expressions Caused by Stimulus Tasks","authors":"Weitong Guo, Hongwu Yang, Zhenyu Liu","doi":"10.1109/ACIIW.2019.8925293","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925293","url":null,"abstract":"With the growth of the global population, the proportion of individuals with depression has rapidly increased; it is currently the most prevalent mental health disorder. Although existing studies on depression have mainly examined the several databases, which comprise facial images and videos of non-Chinese subjects, there are few effective databases for a Chinese population. In this study, we first create a depression database by asking participants to perform five mood-elicitation tasks. After each task, their facial expressions are collected via a Kinect. In the depression database, the facial feature points (FFP) and facial action units (AU) are obtained. We build a range of deep belief network (DBN) models based on FFPs and AUs to extract facial features from facial expressions, named 5DBN, AU-5DBN and 5DBN-AU. We evaluate all proposed models in our built database, and the results demonstrate that (1) the recognition performance of the AU-5DBN model is higher than that of the 5DBN-AU model, and that of the single feature model is the lowest; (2) The performance of depression recognition in the positive and negative emotional stimuluses are higher than that of neutral emotional stimulus; (3) The classification rate for females is generally higher than that for males. Most importantly, the constructed database is from a real environment, i.e., several psychiatric hospitals, and has a certain scale. The experimental results show higher recognition performance in the database; thus, the proposed method is validated as effective in identifying depression.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122020211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Trust and Social Behaviours in Children's Learning from Social Robots","authors":"Rebecca Stower","doi":"10.1109/ACIIW.2019.8925269","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925269","url":null,"abstract":"Understanding how social robots interact with chil-dren's learning is a topic that is currently attracting considerable interest. Yet, there is currently only loose agreement on how social behaviours in robots should best be implemented, and recent research suggests that social behaviours in robots may not always be beneficial for children's learning outcomes. There are similarly conflicting findings on the benefits of social behaviours in promoting trust in robots. This interplay between robots' social behaviours, trust, and learning is therefore yet to be fully investigated. Consequently, the goal of this dissertation is twofold; firstly, to establish a consistent definition and operationalisation of social behaviours in robots, and secondly to determine the effects of these social behaviours on children's learning and evaluations of trust. To this end, four studies are proposed. Through the lens of trust formation, breakdown, and recovery, these studies aim to help understand the role of social behaviours in children's learning from social robots.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124775192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introducing the EmteqVR Interface for Affect Detection in Virtual Reality","authors":"I. Mavridou, E. Seiss, Theodoros Kostoulas, M. Hamedi, E. Balaguer-Ballester, C. Nduka","doi":"10.1109/ACIIW.2019.8925297","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925297","url":null,"abstract":"This paper introduces the wearable technology EmteqVR ™ which can detect facial expressions and physiological responses from the user's facial area whilst the user is wearing a commercial Virtual Reality head mounted display (HMD). This EmteqVR interface, an evolution of the earlier prototype called “Faceteq”, comprises nine biometric sensors including f-EMG, PPG and IMU, enabling it to detect the affective state of the user in real time. This newly developed technology can revolutionize the way we collect data, design experiences and interact within Virtual, Mixed and Augmented Realities. In addition, this novel approach could assist in healthcare interventions and future experimental studies. Our team developed a Virtual Reality experience specifically designed to induce various emotional responses to users. We will demonstrate how the current interface and a custom expression detection algorithm are used in real-time to provide feedback on the user's affective state within Virtual Reality.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130885139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Gated Convolutional Networks and Self-Attention Mechanism for Speech Emotion Recognition","authors":"C. Li, Jinlong Jiao, Yiqin Zhao, Ziping Zhao","doi":"10.1109/ACIIW.2019.8925283","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925283","url":null,"abstract":"Discrete speech emotion recognition (SER), the assignment of a single emotion label to an entire speech utterance, is typically performed as a sequence-to-label task. The predominant approach to SER to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we introduce new gated convolutional networks and apply them to SER, which can be more efficient since they allow parallelization over sequential tokens. We present a novel model architecture that incorporates a gated convolutional neural network and a temporal attention-based localization method for speech emotion recognition. To the best of the authors' knowledge, this is the first time that such a hybrid architecture is employed for SER. We demonstrate the effectiveness of our approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus. The experimental results demonstrate that our proposed model outperforms current state-of-the-art approaches.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126622783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smile Intensity Detection in Multiparty Interaction using Deep Learning","authors":"Philine Witzig, James Kennedy, Cristina Segalin","doi":"10.1109/ACIIW.2019.8925261","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925261","url":null,"abstract":"Emotion expression recognition is an important aspect for enabling decision making in autonomous agents and systems designed to interact with humans. In this paper, we present our experience in developing a software component for smile intensity detection for multiparty interaction. First, the deep learning architecture and training process is described in detail. This is followed by analysis of the results obtained from testing the trained network. Finally, we outline the steps taken to implement and visualize this network in a real-time software component.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114170887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Emotion Navigation: Few-shot Visual Sentiment Analysis by Auxiliary Noisy Data","authors":"Lin Wang, Xiangmin Xu, Fang Liu, Xiaofen Xing, Bolun Cai, Weirui Lu","doi":"10.1109/ACIIW.2019.8925021","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925021","url":null,"abstract":"Few-shot visual sentiment analysis on social media is an important affective computing task. However, features acquired from few-shot samples are difficult, becasue the visual sentiment is a high-level integration task based on content and style. To address this issue, inspired by human learning processing, only a small number of multi-category emotions are learned from courses or specific occasions. In this paper, we propose a robust emotion navigation framework using auxiliary noisy data to re-focus on few-shot precise emotion knowledge. Firstly, we pre-trained the network on a large noisy data with cross-entropy loss, and the noise matrix can be estimated by predicted probability. Secondly, few-shot precise samples are applied as the prototype center to guide noisy data clustering. Here, the noise matrix is embedded into the loss function for re-weighting, which improves the noise robustness of the network. Finally, we relabel the noisy dataset with above joint training predictions and then re-train the network coarse-to-fine. We conduct experiments on three public sentiment datasets, including Sentibank, Twitter and Emotion6. The results demonstrate the effectiveness of the proposed method.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116052291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting negativity in user comments using emotional maps and convolutional neural networks","authors":"Elena Manishina, Dylan Tweed, Guillaume Tiberi, Lorena Gayarre Pena, Nicolas Martin","doi":"10.1109/ACIIW.2019.8925167","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925167","url":null,"abstract":"In this paper we present a new approach to negativity detection in online user comments - an emotional image model. This model mimics image processing paradigm, where a comment is represented as a sentiment map retracing the sequence and proportions of various emotions in the text extract. We use 1D convolutional neural networks (CNN) to process 1D multichannel emotional maps which represent the emotional/sentiment image of a comment. The results show that our approach is capable of modeling and processing complex emotional patterns and detecting specific sentiments within the text image (negativity in our case) in a way similar to a classical CNN in object detection/image classification tasks.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133693346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PAGAN: Platform for Audiovisual General-purpose ANnotation","authors":"Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis","doi":"10.1109/ACIIW.2019.8925149","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925149","url":null,"abstract":"This paper presents an online platform, named PAGAN, for crowdsourcing affect annotations. The platform provides researchers with an easy-access solution for labelling any type of audiovisual content. The tool currently features an annotator interface, which offers three different time-continuous, dimensional annotation tools. PAGAN aims to serve as a free online platform for crowdsourcing large affective corpora-required from data-hungry machine learning methods for modelling affect-through a publicly available webpage, which is easy to share and use by participants.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115079022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}