Latest Publications: 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)

Learning Temporal and Bodily Attention in Protective Movement Behavior Detection
Chongyang Wang, Min Peng, Temitayo A. Olugbade, N. Lane, A. Williams, N. Bianchi-Berthouze
{"title":"Learning Temporal and Bodily Attention in Protective Movement Behavior Detection","authors":"Chongyang Wang, Min Peng, Temitayo A. Olugbade, N. Lane, A. Williams, N. Bianchi-Berthouze","doi":"10.1109/ACIIW.2019.8925084","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925084","url":null,"abstract":"For people with chronic pain, the assessment of protective behavior during physical functioning is essential to understand their subjective pain-related experiences (e.g., fear and anxiety toward pain and injury) and how they deal with such experiences (avoidance or reliance on specific body joints), with the ultimate goal of guiding intervention. Advances in deep learning (DL) can enable the development of such intervention. Using the EmoPain MoCap dataset, we investigate how attention-based DL architectures can be used to improve the detection of protective behavior by capturing the most informative temporal and body configurational cues characterizing specific movements and the strategies used to perform them. We propose an end-to-end deep learning architecture named BodyAttentionNet (BANet). BANet is designed to learn temporal and bodily parts that are more informative to the detection of protective behavior. The approach addresses the variety of ways people execute a movement (including healthy people) independently of the type of movement analyzed. Through extensive comparison experiments with other state-of-the-art machine learning techniques used with motion capture data, we show statistically significant improvements achieved by using these attention mechanisms. In addition, the BANet architecture requires a much lower number of parameters than the state of the art for comparable if not higher performances.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117183007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
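As a concrete illustration of the idea, here is a minimal PyTorch sketch of combined bodily and temporal soft-attention over motion-capture sequences. The layer sizes, the LSTM encoder, and the two-class head are illustrative assumptions, not the authors' exact BANet architecture.

```python
# Minimal sketch: bodily attention pools joints into a per-frame body summary;
# temporal attention then pools frames into a clip-level vector for classification.
import torch
import torch.nn as nn

class AttentionMoCapNet(nn.Module):
    def __init__(self, joint_dim=3, hidden=32):
        super().__init__()
        # Bodily attention: score each joint from its own coordinates.
        self.joint_scorer = nn.Linear(joint_dim, 1)
        # Temporal encoder over the attention-pooled body representation.
        self.encoder = nn.LSTM(joint_dim, hidden, batch_first=True)
        # Temporal attention: score each time step from the LSTM state.
        self.time_scorer = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, 2)  # protective vs. not

    def forward(self, x):
        # x: (batch, time, joints, joint_dim)
        joint_w = torch.softmax(self.joint_scorer(x).squeeze(-1), dim=2)  # (b, t, j)
        body = (joint_w.unsqueeze(-1) * x).sum(dim=2)                     # (b, t, d)
        h, _ = self.encoder(body)                                         # (b, t, hidden)
        time_w = torch.softmax(self.time_scorer(h).squeeze(-1), dim=1)    # (b, t)
        pooled = (time_w.unsqueeze(-1) * h).sum(dim=1)                    # (b, hidden)
        return self.classifier(pooled)

logits = AttentionMoCapNet()(torch.randn(4, 180, 22, 3))  # 4 clips, 180 frames, 22 joints
```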
Enriching Discrete Actions with Impactful Emotions
R. Rodrigues
{"title":"Enriching Discrete Actions with Impactful Emotions","authors":"R. Rodrigues","doi":"10.1109/ACIIW.2019.8925107","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925107","url":null,"abstract":"Believable interactions between synthetic characters are an important factor defining the success of a virtual environment relying on human participants being able to create emotional bonds with artificial characters. Not only is it important for characters to be believable, but also interactions with or between such characters too. Previously we created 3Motion, a model for synthetic character interaction based on anticipation and emotion that allows for precise affective communication of intention-based behaviors, which improves the believability of both synthetic characters and their actions. Currently the model has been evaluated using a low fidelity text-based prototype and shown to create believable interactions. We want to improve the model by adapting it to 3D real-time virtual environment, specifically using an online learning environment virtual coaching platform, and evaluate it in a real world scenario. Measuring believability, but also trying to assess the impact on the user learning experience.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115514891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Wearable Sensing Technology for Capturing and Sharing Emotional Experience of Running
Tao Bi
{"title":"Wearable Sensing Technology for Capturing and Sharing Emotional Experience of Running","authors":"Tao Bi","doi":"10.1109/ACIIW.2019.8925104","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925104","url":null,"abstract":"Running is one of the most popular exercises for maintaining physically active. Current technology is limited to capturing performance measurements such as speed, distance, cadence, etc. Research shows that the running experience is much richer and more complex and go well beyond what existing technology can currently provide. This research addresses the question of how wearable sensing technology can capture, recognize, represent, and share the emotional, physiological, cognitive experience of long-distance running. Taking a phenomenologically situated perspective, this research focuses on the running experience in real-life contexts (e.g. outdoor training and marathon) rather than the controlled laboratory conditions. The first phase of this research focuses on understanding the experience of running from the runner and spectator's perspectives. The second phase explores affective computing techniques and ubiquitous wearable sensors to capture and share aspects of it.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"61 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123299518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
A stress recognition system using HRV parameters and machine learning techniques
Giorgos Giannakakis, K. Marias, M. Tsiknakis
{"title":"A stress recognition system using HRV parameters and machine learning techniques","authors":"Giorgos Giannakakis, K. Marias, M. Tsiknakis","doi":"10.1109/ACIIW.2019.8925142","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925142","url":null,"abstract":"In this study, we investigate reliable heart rate variability (HRV) parameters in order to recognize stress. An experiment protocol was established including different stressors which correspond to a range of everyday life conditions. A personalized baseline was formulated for each participant in order to eliminate inter-subject variability and to normalize data providing a common reference for the whole dataset. The extracted HRV features were transformed accordingly using the pairwise transformation in order to take into account the personalized baseline of each phase in constructing the stress model. The most robust features were selected using the minimum Redundancy Maximum Relevance (mRMR) selection algorithm. The selected features fed machine learning systems achieving a classification accuracy of 84.4% using 10-fold cross-validation.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121761296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 30
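A hedged scikit-learn sketch of the classification stage described above: rank features by relevance, then evaluate an SVM under 10-fold cross-validation. Note that mutual_info_classif is only a relevance-based stand-in for mRMR (which additionally penalizes redundancy among selected features), and the feature matrix here is random placeholder data, not real HRV features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))    # 120 recordings x 40 HRV features (placeholder)
y = rng.integers(0, 2, size=120)  # stress / no-stress labels (placeholder)

clf = make_pipeline(
    StandardScaler(),                          # normalize features
    SelectKBest(mutual_info_classif, k=10),    # relevance-based proxy for mRMR
    SVC(kernel="rbf"),                         # stress classifier
)
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```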
A Sensor-based Framework for Real-time Detection and Alleviation of Public Speaking Anxiety
Everlyne Kimani
{"title":"A Sensor-based Framework for Real-time Detection and Alleviation of Public Speaking Anxiety","authors":"Everlyne Kimani","doi":"10.1109/ACIIW.2019.8925262","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925262","url":null,"abstract":"Oral presentations are important and challenging tasks that most people struggle with due to public speaking anxiety. To date, there are fewer research interventions that are designed to assist presenters manage their anxiety in real-time during presentations. This research attempts to fill this gap by exploring ways in which sensor-driven technologies can help reduce anxiety during presentations. To help presenters manage their anxiety, I will design and evaluate an automated real-time framework for detecting public speaking anxiety and explore behavioral just-in-time techniques that presenters can use while presenting. The automated framework will be guided by a physiological detection model, use a virtual agent as the user interface, and will assist presenters during their presentation.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125801971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Feedback of Physiological-Based Emotion before Publishing Emotional Expression on Social Media
Feng Chen, Peeraya Sripian, Midori Sugaya
{"title":"Feedback of Physiological-Based Emotion before Publishing Emotional Expression on Social Media","authors":"Feng Chen, Peeraya Sripian, Midori Sugaya","doi":"10.1109/ACIIW.2019.8925097","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925097","url":null,"abstract":"Making emotional expressions on social media has recently become an ordinary part of life, but sometimes people might send messages with the wrong expression to other people through these media based on unconscious emotions such as anger. However, it is often difficult to recognize these unconscious emotions, and easy to send inappropriate expressions to other people without proper consideration. This could cause an unpleasant experience. To avoid these situations, it is expected that some observable mechanism could detect and communicate the unconscious emotions to the user before they send the message. These days, there are approaches that can detect unconscious emotions using physiological sensors such as EEGs and heartbeat sensors. These approaches provide the procedure to make unconscious emotions observable and communicated to the user in real-time. We apply this technology for detecting the mismatch between the unconscious emotion and expression before sending the message. Based on this idea, we design and implement the mechanism for detecting the mismatch and feed it back to the user of social media. We carry out an experiment using the proposed system. The preliminary result shows that the system tends to be effective for the purpose.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126065135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
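A toy sketch of the mismatch check the abstract describes: compare an emotion label derived from physiological sensors with the emotion conveyed by the drafted message, and warn before sending. Both labels are stand-ins produced elsewhere; the paper's actual EEG/heart-rate pipeline and expression analysis are not reproduced here.

```python
# Hypothetical emotion categories for the mismatch check; the real system's
# label set and classifiers are not specified in the abstract.
NEGATIVE = {"anger", "sadness", "fear"}

def mismatch(sensed_emotion: str, message_emotion: str) -> bool:
    """Flag when the unconscious (sensed) emotion contradicts the message tone."""
    return (sensed_emotion in NEGATIVE) != (message_emotion in NEGATIVE)

# Example: physiological sensors detect anger while the drafted message reads joyful.
if mismatch(sensed_emotion="anger", message_emotion="joy"):
    print("Warning: your message may not reflect how you are feeling. Send anyway?")
```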
Multi-Modal Depression Detection and Estimation
Le Yang
{"title":"Multi-Modal Depression Detection and Estimation","authors":"Le Yang","doi":"10.1109/ACIIW.2019.8925288","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925288","url":null,"abstract":"Depression and anxiety disorders are critical problems in modern society. The WHO studies suggest that roughly 12.8 percent of the world's population are suffering from a depressive disorder. In this work, we propose several novel approaches towards multi-modal depression detection and estimation. Our previous studies mainly explored the multi-modal features and multi-modal fusion strategies, experimental results showed that the proposed hybrid depression classification and estimation multi-modal fusion framework obtains promising performance. The current work contains two parts: 1) In order to mitigate the impact of lack of data on training depression deep models, we utilize Generative Adversarial Network (GAN) to augment depression audio features, so as to improve depression severity estimation performance. 2) We propose a novel FACS3D-Net to integrate $3D$ and $2D$ convolution network for facial Action Unit (AU) detection. As far as we know, this is the first work to apply $3D$ CNN to the problem of AU detection. Our future work will 1) focus on combining depression estimation with dimensional affective analysis through the proposed FACS3D-Net, and 2) collect Chinese depression database. When completed, these studies will compose the author's dissertation.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129460037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
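A minimal PyTorch sketch of the GAN-based feature augmentation in part 1: a generator learns to produce synthetic audio feature vectors that a discriminator cannot distinguish from real ones. The network sizes, feature dimensionality, and training data are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM = 128, 32
G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, FEAT_DIM))
D = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_feats = torch.randn(256, FEAT_DIM)  # stand-in for real depression audio features

for step in range(200):
    real = real_feats[torch.randint(0, 256, (64,))]
    fake = G(torch.randn(64, NOISE_DIM))
    # Discriminator step: push real toward 1, fake toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: make fakes that the discriminator scores as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

augmented = G(torch.randn(100, NOISE_DIM)).detach()  # synthetic feature vectors
```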
Local Second-Order Gradient Cross Pattern for Automatic Depression Detection
Mingyue Niu, J. Tao, Bin Liu
{"title":"Local Second-Order Gradient Cross Pattern for Automatic Depression Detection","authors":"Mingyue Niu, J. Tao, Bin Liu","doi":"10.1109/ACIIW.2019.8925158","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925158","url":null,"abstract":"Depression is a psychiatric disorder that seriously affects people's work and life. At present, the development of the automatic depression detection technology has become the focus of many researchers due to the serious imbalance of doctor-patient ratio. Physiological studies have revealed that there are differences in facial activity between normal and depressed individuals, so some works has been done to detect depression by extracting facial features. However, these works are limited in capturing the subtle changes. For these reasons, this paper proposes a novel local pattern named Local Second-Order Gradient Cross Pattern (LSOGCP) to extract the subtle facial dynamics in videos to improve the accuracy of depression detection. In particular, we firstly obtain LSOGCP feature through high-order gradient and cross coding scheme to characterize the detailed texture of each frame. Then LSOGCP histograms from three orthogonal planes (TOP) are generated to form the video representation denoted as LSOGCP-TOP. Finally, a hierarchical method of between-group classification and within-group regression is employed to predict the score of depression severity. Experiments are conducted on two publicly available databases i.e. AVEC2013 and AVEC2014. The results demonstrate that our proposed method achieves better performance than the previous algorithms.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126924533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
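A simplified numpy sketch of the idea behind LSOGCP: compute second-order image gradients, binarize each pixel's four cross neighbors (up/down/left/right) against the center into a 4-bit code, and histogram the codes per frame. This is an illustrative approximation under those assumptions, not the paper's exact coding scheme or its three-orthogonal-planes (TOP) extension.

```python
import numpy as np

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def cross_pattern_histogram(img):
    g2 = gradient_magnitude(gradient_magnitude(img))  # second-order gradient field
    c = g2[1:-1, 1:-1]                                # interior pixels (centers)
    neighbors = [g2[:-2, 1:-1], g2[2:, 1:-1],         # up, down
                 g2[1:-1, :-2], g2[1:-1, 2:]]         # left, right
    code = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neighbors):
        code |= (n >= c).astype(int) << bit           # 4-bit cross code per pixel
    hist, _ = np.histogram(code, bins=16, range=(0, 16))
    return hist / hist.sum()                          # 16-bin frame descriptor

frame = np.random.rand(64, 64)                        # placeholder face frame
print(cross_pattern_histogram(frame))
```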
What Affects the Performance of Convolutional Neural Networks for Audio Event Classification
Helin Wang, Dading Chong, Dongyan Huang, Yuexian Zou
{"title":"What Affects the Performance of Convolutional Neural Networks for Audio Event Classification","authors":"Helin Wang, Dading Chong, Dongyan Huang, Yuexian Zou","doi":"10.1109/ACIIW.2019.8925277","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925277","url":null,"abstract":"Convolutional neural networks (CNN) have played an important role in Audio Event Classification (AEC). Both 1D-CNN and 2D-CNN methods have been applied to improve the classification accuracy of AEC, and there are many factors affecting the performance of models based on CNN. In this paper, we study different factors affecting the performance of CNN for AEC, including sampling rate, signal segmentation methods, window size, mel bins and filter size. The segmentation method of the event signal is an important one among them. It may lead to overfitting problem because audio events usually happen only for a short duration. We propose a signal segmentation method called Fill-length Processing to address the problem. Based on our study of these factors, we design convolutional neural networks for audio event classification (called FPNet). On the environmental sounds dataset ESC-50, the classification accuracies of FPNet-1D and FPNet-2D achieve 73.90% and 85.10% respectively, which improve significantly comparing to the previous methods.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121508482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
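Fill-length Processing is not specified in detail here; one plausible reading is tiling a short event until it fills the fixed analysis window, instead of zero-padding, before computing log-mel features. The sketch below follows that assumption, with window length and mel settings chosen arbitrarily.

```python
import numpy as np
import librosa

def fill_length(y, target_len):
    """Repeat a short clip until it reaches target_len samples, then truncate."""
    if len(y) >= target_len:
        return y[:target_len]
    reps = int(np.ceil(target_len / len(y)))
    return np.tile(y, reps)[:target_len]

sr = 22050
y = np.random.randn(sr // 2)                   # 0.5 s placeholder event signal
y5 = fill_length(y, 5 * sr)                    # fill a fixed 5 s window
mel = librosa.feature.melspectrogram(y=y5, sr=sr, n_mels=64)
logmel = librosa.power_to_db(mel)              # CNN input: (64 mel bins, frames)
print(logmel.shape)
```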
Gram Matrices Formulation of Body Shape Motion: An Application for Depression Severity Assessment
M. Daoudi, Z. Hammal, Anis Kacem, J. Cohn
{"title":"Gram Matrices Formulation of Body Shape Motion: An Application for Depression Severity Assessment","authors":"M. Daoudi, Z. Hammal, Anis Kacem, J. Cohn","doi":"10.1109/ACIIW.2019.8925009","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925009","url":null,"abstract":"We propose an automatic method to measure depression severity from body movement dynamics in participants undergoing treatment for depression. Participants in a clinical trial for treatment of depression were interviewed on up to four occasions at 7-week intervals with the clinician-administered Hamilton Rating Scale for Depression. Body movement was tracked using OpenPose from full-body video recordings of the interviews. Gram matrices formulation was used for body shape and trajectory representations from each video interview. Kinematic features were extracted and encoded for video based representation using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM was used to classify the encoded body movement dynamics into three levels of depression severity: severe, mild, and remission. Accuracy was high for severe depression (68.57%) followed by mild depression (56%), and then remission (37.93%). The obtained results suggest that automatic detection of depression severity from body movement is feasible.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134097928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
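A minimal numpy sketch of the Gram-matrix representation: per frame, the Gram matrix of centered landmark coordinates is invariant to rotations of the pose, which is what makes it a natural shape descriptor. The joint count is a placeholder, and the downstream GMM/Fisher-vector encoding and SVM stages are omitted.

```python
import numpy as np

def gram_matrix(landmarks):
    """landmarks: (n_joints, 2) OpenPose coordinates for one frame."""
    centered = landmarks - landmarks.mean(axis=0)   # remove translation
    return centered @ centered.T                    # (n_joints, n_joints), rotation-invariant

frames = np.random.rand(100, 25, 2)                 # 100 frames x 25 joints (placeholder)
grams = np.stack([gram_matrix(f) for f in frames])  # per-frame shape descriptors
velocity = np.diff(frames, axis=0)                  # simple kinematic feature over time
print(grams.shape, velocity.shape)
```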