{"title":"Towards explainable trajectory classification: A segment-based perturbation approach","authors":"Le Xuan Tung , Bui Dang Phuc , Vo Nguyen Le Duy","doi":"10.1016/j.neucom.2025.131691","DOIUrl":null,"url":null,"abstract":"<div><div>Trajectory classification is essential in applications such as transportation analysis, wildlife tracking, and human mobility studies. However, many existing models, especially deep learning-based approaches, suffer from a lack of explainability, making it challenging to understand their decision-making processes. To address this issue, we propose a model-agnostic explainability framework for trajectory classification based on subsegment perturbation. Our method systematically perturbs individual trajectory subsegments and constructs an importance map to highlight their contributions to the classification outcome. Additionally, we also propose a novel fidelity to assess the ability to provide interpretations as well as the quality of the interpretations. We evaluate the framework using multiple benchmark trajectory datasets and various classifiers, including both traditional machine learning models and deep learning models. Experimental results demonstrate that our method provides effective and meaningful explanations, especially the flexibility to be applied to many types of models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"658 ","pages":"Article 131691"},"PeriodicalIF":6.5000,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092523122502363X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Trajectory classification is essential in applications such as transportation analysis, wildlife tracking, and human mobility studies. However, many existing models, especially deep learning-based approaches, suffer from a lack of explainability, making it challenging to understand their decision-making processes. To address this issue, we propose a model-agnostic explainability framework for trajectory classification based on subsegment perturbation. Our method systematically perturbs individual trajectory subsegments and constructs an importance map that highlights their contributions to the classification outcome. Additionally, we propose a novel fidelity metric to assess both the ability to provide interpretations and the quality of those interpretations. We evaluate the framework on multiple benchmark trajectory datasets and various classifiers, including both traditional machine learning models and deep learning models. Experimental results demonstrate that our method provides effective and meaningful explanations and, in particular, can be flexibly applied to many types of models.
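The abstract outlines the core idea: perturb each trajectory subsegment, observe how the classifier's output changes, and aggregate the changes into an importance map. Below is a minimal, hypothetical Python sketch of that general scheme. The segment length, the linear-interpolation perturbation, and the probability-drop score are illustrative assumptions, not details taken from the paper; `dummy_predict_proba` is a stand-in for any black-box classifier.

```python
# Hypothetical sketch of segment-based perturbation importance for trajectory
# classification. Design choices (segment length, linear-interpolation
# perturbation, probability-drop score) are assumptions for illustration only.
import numpy as np


def perturb_segment(traj: np.ndarray, start: int, end: int) -> np.ndarray:
    """Replace points [start, end) with a straight line between the segment's
    neighbors, erasing the local shape information of that subsegment."""
    perturbed = traj.copy()
    n = end - start
    if n <= 0:
        return perturbed
    left = traj[max(start - 1, 0)]
    right = traj[min(end, len(traj) - 1)]
    for i in range(n):
        t = (i + 1) / (n + 1)
        perturbed[start + i] = (1 - t) * left + t * right
    return perturbed


def segment_importance(traj: np.ndarray, predict_proba, target_class: int,
                       segment_len: int = 10) -> np.ndarray:
    """Score each subsegment by the drop in the target-class probability when
    that subsegment is perturbed; the classifier is treated as a black box."""
    base = predict_proba(traj[None])[0, target_class]
    importance = np.zeros(len(traj))
    for start in range(0, len(traj), segment_len):
        end = min(start + segment_len, len(traj))
        p = predict_proba(perturb_segment(traj, start, end)[None])[0, target_class]
        importance[start:end] = base - p  # larger drop => more important segment
    return importance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.normal(size=(100, 2)), axis=0)  # toy 2-D trajectory

    def dummy_predict_proba(batch):
        # Stand-in classifier: class-1 probability grows with total path length.
        lengths = np.linalg.norm(np.diff(batch, axis=1), axis=-1).sum(axis=1)
        p1 = 1.0 / (1.0 + np.exp(-(lengths - 80.0) / 10.0))
        return np.stack([1.0 - p1, p1], axis=1)

    scores = segment_importance(traj, dummy_predict_proba, target_class=1)
    print("Most influential segment starts near index", int(np.argmax(scores)))
```

Because the explainer only queries `predict_proba`, the same sketch applies unchanged to traditional machine learning models and deep learning models alike, which is the model-agnostic flexibility the abstract emphasizes.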
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.