Exploring equine behavior: Wearable sensors data and explainable AI for enhanced classification
Authors: Bekir Cetintav, Ahmet Yalcin
DOI: 10.1016/j.jevs.2025.105568
Journal: Journal of Equine Veterinary Science, Volume 149, Article 105568 (Q2, Veterinary Sciences; IF 1.3)
Publication date: 2025-04-10
Understanding equine behavior through advanced monitoring technologies is crucial for improving animal welfare, optimizing training strategies, and enabling early detection of health or stress-related issues. This study integrates wearable sensor data with Explainable Artificial Intelligence (XAI) techniques, particularly SHAP (Shapley Additive Explanations), to enhance interpretability in equine behavior classification. The data used in this study were sourced from an open-source dataset, ensuring transparency and reproducibility. Originally, the data were collected from 18 horses using sensor devices attached to a collar around the neck, including a three-axis accelerometer, gyroscope, and magnetometer, sampling at 100 Hz to capture a wide range of motion data. Our dataset consists of 17 equine behavior classes, including walking, grazing, and galloping. A multi-class classification framework was developed, employing machine learning models such as Random Forest, KNN, and XGBoost. The Random Forest model outperformed the others with an accuracy of 82.3%, demonstrating its effectiveness in distinguishing complex behaviors. A key novelty of this study is the use of SHAP for feature attribution analysis, allowing us to determine which sensor modalities contribute most to each behavior class. The SHAP analysis revealed that locomotion behaviors like 'galloping' were dominated by accelerometer features capturing motion intensity, while stationary behaviors like 'standing' relied primarily on magnetometer data for orientation detection. Stress-related behaviors, such as 'head-shaking,' were characterized by gyroscopic angular velocity, highlighting their dynamic nature. By leveraging SHAP to bridge the gap between "black-box" machine learning models and interpretable decision-making, this study provides actionable insights for real-time monitoring, stress detection, and veterinary interventions.
The findings enhance the transparency and applicability of AI-driven animal behavior analysis, setting a new benchmark for explainable behavior classification in equine studies. By advancing both predictive accuracy and model interpretability, this research lays the groundwork for more comprehensive and trustworthy applications in equine welfare and veterinary decision-making.
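The abstract does not detail the preprocessing pipeline, but sensor-based behavior classification typically segments the 100 Hz streams into fixed-length windows and summarizes each window with statistical features before model training. A minimal, stdlib-only sketch of that step is below; the window length, overlap, feature set, and the `window_features` helper are illustrative assumptions, not the authors' reported method:

```python
import math
from statistics import mean, stdev

def window_features(ax, ay, az, window=200, step=100):
    """Segment tri-axial 100 Hz accelerometer streams into 2 s windows
    (50% overlap) and compute simple per-window summary features."""
    feats = []
    for start in range(0, len(ax) - window + 1, step):
        w = slice(start, start + window)
        # Acceleration magnitude per sample -- a motion-intensity signal
        # of the kind the SHAP analysis found dominant for gaits like galloping.
        mag = [math.sqrt(x * x + y * y + z * z)
               for x, y, z in zip(ax[w], ay[w], az[w])]
        feats.append({
            "acc_mag_mean": mean(mag),
            "acc_mag_std": stdev(mag),
            "acc_x_mean": mean(ax[w]),
            "acc_y_mean": mean(ay[w]),
            "acc_z_mean": mean(az[w]),
        })
    return feats

# Toy example: 4 s of synthetic data (400 samples at 100 Hz) with
# near-constant gravity on the z axis and slight jitter.
n = 400
ax = [0.0] * n
ay = [0.0] * n
az = [1.0 + 0.01 * (i % 2) for i in range(n)]
rows = window_features(ax, ay, az)
print(len(rows))  # windows start at samples 0, 100, 200 -> 3 windows
```

Feature rows like these would then be labeled with the annotated behavior class and fed to the classifiers named in the abstract (Random Forest, KNN, XGBoost), e.g. via scikit-learn.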
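SHAP assigns each feature an additive contribution to a model's prediction by averaging its marginal effect over all feature orderings. In practice the authors would have applied the `shap` library to their trained Random Forest; purely as a self-contained illustration of the underlying idea, exact Shapley values for a toy two-feature "model" can be computed directly from the definition (the coalition value function `v` and the feature names are invented for this sketch):

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings (tractable only for a handful of features)."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        seen = set()
        for f in order:
            before = value(frozenset(seen))
            seen.add(f)
            phi[f] += value(frozenset(seen)) - before
    return {f: total / len(perms) for f, total in phi.items()}

# Toy "model output" over feature coalitions: accelerometer intensity
# contributes 0.6, gyroscope 0.3, plus a small interaction (+0.1).
def v(coalition):
    score = 0.0
    if "acc" in coalition:
        score += 0.6
    if "gyro" in coalition:
        score += 0.3
    if {"acc", "gyro"} <= coalition:
        score += 0.1
    return score

phi = shapley_values(["acc", "gyro"], v)
print(phi)  # acc ~ 0.65, gyro ~ 0.35: the interaction is split evenly
```

The attributions sum to the full-coalition output (0.65 + 0.35 = 1.0), which is the additivity property that makes SHAP summaries over sensor modalities, as reported in the abstract, directly comparable across behavior classes.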
Journal description:
Journal of Equine Veterinary Science (JEVS) is an international publication designed for the practicing equine veterinarian, equine researcher, and other equine health care specialist. Published monthly, each issue of JEVS includes original research, reviews, case reports, short communications, and clinical techniques from leaders in the equine veterinary field, covering such topics as laminitis, reproduction, infectious disease, parasitology, behavior, podology, internal medicine, surgery and nutrition.