Tao Wang, Shuang Liu, Feng He, Minghao Du, Weina Dai, Yufeng Ke, Dong Ming

Knowledge-Based Systems, Volume 308, Article 112744. Published 2024-11-20. DOI: 10.1016/j.knosys.2024.112744
Affective body expression recognition framework based on temporal and spatial fusion features
Affective body expression recognition technology enables machines to interpret non-verbal emotional signals from human movements, which is crucial for facilitating natural and empathetic human–machine interaction (HCI). This work proposes a new framework for emotion recognition from body movements, providing a universal and effective solution for decoding the temporal–spatial mapping between emotions and body expressions. Compared with previous studies, our approach extracted interpretable temporal and spatial features by constructing a body expression energy model (BEEM) and a multi-input symmetric positive definite matrix network (MSPDnet). In particular, the temporal features extracted from the BEEM reveal the energy distribution, dynamical complexity, and frequency activity of the body expression under different emotions, while the spatial features obtained by MSPDnet capture the spatial Riemannian properties between body joints. Furthermore, this paper introduces an attentional temporal–spatial feature fusion (ATSFF) algorithm to adaptively fuse temporal and spatial features with different semantics and scales, significantly improving the discriminability and generalizability of the fused features. The proposed method achieves recognition accuracies over 90% across four public datasets, outperforming most state-of-the-art approaches.
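The abstract's spatial branch rests on two ideas: covariance matrices of body-joint trajectories are symmetric positive definite (SPD) and so live on a Riemannian manifold, and temporal and spatial features can be fused with attention weights. The sketch below illustrates both ideas in miniature; it is not the paper's BEEM/MSPDnet/ATSFF implementation, and the function names, the norm-based attention scores, and the log-Euclidean tangent mapping are illustrative assumptions.

```python
import numpy as np

def spd_from_joints(joints, eps=1e-6):
    # joints: (T, J) array of J joint coordinates over T frames.
    # The joint covariance is symmetric positive semi-definite; a small
    # ridge makes it strictly SPD, as SPD-matrix networks require.
    X = joints - joints.mean(axis=0, keepdims=True)
    cov = X.T @ X / max(len(joints) - 1, 1)
    return cov + eps * np.eye(cov.shape[0])

def log_euclidean(spd):
    # Map an SPD matrix to its tangent space via the matrix logarithm --
    # a standard way to expose Riemannian structure to Euclidean layers.
    w, V = np.linalg.eigh(spd)
    return (V * np.log(w)) @ V.T

def attention_fuse(f_temporal, f_spatial):
    # Toy attentional fusion (stand-in for ATSFF): score each feature
    # vector, softmax the scores, and weight each branch before
    # concatenating, so the two modalities are adaptively balanced.
    scores = np.array([np.linalg.norm(f_temporal), np.linalg.norm(f_spatial)])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return np.concatenate([w[0] * f_temporal, w[1] * f_spatial]), w
```

In a full pipeline, `log_euclidean(spd_from_joints(...))` would feed the spatial branch and a sequence model would supply `f_temporal`; here the point is only that SPD features and attention-weighted fusion compose cleanly.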
About the journal:
Knowledge-Based Systems is an international, interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to balance theory with practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.