Title: Automatic canine emotion recognition through multimodal approach
Authors: Eliaf Garcia-Loya, Irvin Hussein Lopez-Nava, Humberto Pérez-Espinosa, Veronica Reyes-Meza, Mariel Urbina-Escalante
Journal: Pattern Recognition Letters, vol. 196, pp. 351-357
Publication date: 2025-07-05
DOI: 10.1016/j.patrec.2025.06.018
URL: https://www.sciencedirect.com/science/article/pii/S0167865525002466
Citations: 0
Abstract
This study introduces a comprehensive multimodal approach for analyzing and classifying emotions in dogs, combining visual, inertial, and physiological data to improve emotion recognition performance. The research focuses on the dimensions of valence and arousal to categorize dog emotions into four quadrants: playing, frustration, abandonment, and petting. A custom-developed device (PATITA) was used for synchronized data collection, on which a windowing-based feature extraction process was applied. Dimensionality reduction and feature selection techniques were then used to identify the most relevant features across data types. Next, several unimodal and multimodal classification models, including Naïve Bayes, SVM, ExtraTrees, and kNN, were trained and evaluated. Experimental results demonstrated the superiority of the multimodal approach, with the ExtraTrees classifier consistently yielding the best results (F1-score = 0.96) using the reduced feature set. In conclusion, this work presents a robust multimodal framework for canine emotion recognition, providing a foundation for future studies to refine techniques and overcome current limitations, particularly through more sophisticated models and expanded data collection.
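The pipeline the abstract describes (windowed feature extraction over sensor streams, feature selection, then an ExtraTrees classifier) can be sketched with scikit-learn. This is an illustrative sketch only, not the authors' code: the synthetic data, window size, per-window statistics, and selector settings are all assumptions for demonstration.

```python
# Sketch of a windowing -> feature-selection -> ExtraTrees pipeline,
# loosely following the workflow described in the abstract. All data
# below is synthetic; no claim is made about the paper's actual setup.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

def window_features(signal, win=50):
    """Split a 1-D signal into fixed-size windows and compute simple
    per-window statistics (mean, std, min, max)."""
    n = len(signal) // win
    wins = signal[: n * win].reshape(n, win)
    return np.column_stack([wins.mean(1), wins.std(1),
                            wins.min(1), wins.max(1)])

def make_stream(offset, length=5000):
    # Synthetic sensor stream; the class-dependent mean shift stands in
    # for modality differences between emotional states.
    return rng.normal(offset, 1.0, length)

X, y = [], []
for label, offset in [(0, 0.0), (1, 0.8)]:
    # Concatenate window features from two simulated modalities
    # (e.g. inertial + physiological) into one feature vector per window.
    feats = np.hstack([window_features(make_stream(offset)),
                       window_features(make_stream(offset * 2))])
    X.append(feats)
    y.append(np.full(len(feats), label))
X, y = np.vstack(X), np.concatenate(y)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=4)),   # keep most relevant features
    ("trees", ExtraTreesClassifier(n_estimators=100, random_state=0)),
])
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"held-out accuracy: {acc:.2f}")
```

In practice the per-window statistics, the number of selected features, and the fusion strategy (concatenating modality features before classification, as done here) would all be tuned per modality; the paper reports its best multimodal F1-score (0.96) on a reduced feature set.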
Journal description:
Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.