{"title":"Mamba Adaptive Anomaly Transformer with association discrepancy for time series","authors":"Abdellah Zakaria Sellam , Ilyes Benaissa , Abdelmalik Taleb-Ahmed , Luigi Patrono , Cosimo Distante","doi":"10.1016/j.engappai.2025.111685","DOIUrl":null,"url":null,"abstract":"<div><div>Anomaly detection in time series poses a critical challenge in industrial monitoring, environmental sensing, and infrastructure reliability, where accurately distinguishing anomalies from complex temporal patterns remains an open problem. While existing methods, such as the Anomaly Transformer leveraging multi-layer association discrepancy between prior and series distributions and Dual Attention Contrastive Representation Learning architecture (DCdetector) employing dual-attention contrastive learning, have advanced the field, critical limitations persist. These include sensitivity to short-term context windows, computational inefficiency, and degraded performance under noisy and non-stationary real-world conditions. To address these challenges, we present MAAT (Mamba Adaptive Anomaly Transformer), an enhanced architecture that refines association discrepancy modeling and reconstruction quality for more robust anomaly detection. Our work introduces two key contributions to the existing Anomaly transformer architecture: Sparse Attention, which computes association discrepancy more efficiently by selectively focusing on the most relevant time steps. This reduces computational redundancy while effectively capturing long-range dependencies critical for discerning subtle anomalies. A Mamba-Selective State Space Model (Mamba-SSM) is also integrated into the reconstruction module. A skip connection bridges the original reconstruction and the Mamba-SSM output, while a Gated Attention mechanism adaptively fuses features from both pathways. This design balances fidelity and contextual enhancement dynamically, improving anomaly localization and overall detection performance. 
Extensive experiments on benchmark datasets demonstrate that MAAT significantly outperforms prior methods, achieving superior anomaly distinguishability and generalization across diverse time series applications. By addressing the limitations of existing approaches, MAAT sets a new standard for unsupervised time series anomaly detection in real-world scenarios. Code available at <span><span>https://github.com/ilyesbenaissa/MAAT</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"160 ","pages":"Article 111685"},"PeriodicalIF":7.5000,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625016872","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citation count: 0
Abstract
Anomaly detection in time series poses a critical challenge in industrial monitoring, environmental sensing, and infrastructure reliability, where accurately distinguishing anomalies from complex temporal patterns remains an open problem. Existing methods have advanced the field: the Anomaly Transformer leverages multi-layer association discrepancy between prior and series distributions, and the Dual Attention Contrastive Representation Learning architecture (DCdetector) employs dual-attention contrastive learning. Yet critical limitations persist, including sensitivity to short-term context windows, computational inefficiency, and degraded performance under noisy, non-stationary real-world conditions. To address these challenges, we present MAAT (Mamba Adaptive Anomaly Transformer), an enhanced architecture that refines association discrepancy modeling and reconstruction quality for more robust anomaly detection. Our work introduces two key contributions to the Anomaly Transformer architecture. First, Sparse Attention computes association discrepancy more efficiently by selectively attending to the most relevant time steps, reducing computational redundancy while still capturing the long-range dependencies critical for discerning subtle anomalies. Second, a Mamba Selective State Space Model (Mamba-SSM) is integrated into the reconstruction module: a skip connection bridges the original reconstruction and the Mamba-SSM output, while a Gated Attention mechanism adaptively fuses features from both pathways. This design dynamically balances fidelity and contextual enhancement, improving anomaly localization and overall detection performance. Extensive experiments on benchmark datasets demonstrate that MAAT significantly outperforms prior methods, achieving superior anomaly distinguishability and generalization across diverse time series applications.
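The abstract describes sparse attention over the most relevant time steps and an association discrepancy between a prior (distance-based) distribution and the learned series distribution. The sketch below is an illustrative NumPy reconstruction of that general idea, not the paper's implementation: the top-k masking, the Gaussian prior, and the symmetrised KL form are assumptions chosen to mirror the Anomaly Transformer line of work.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_series_attention(q, k, top_k=4):
    """Series association where each query keeps only its top-k scores.

    Masking all but the k largest scores per row is one simple way to
    'selectively focus on the most relevant time steps'.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    thresh = np.sort(scores, axis=-1)[:, -top_k][:, None]  # per-row k-th largest
    masked = np.where(scores >= thresh, scores, -np.inf)   # drop the rest
    return softmax(masked, axis=-1)

def gaussian_prior(n, sigma=1.0):
    """Prior association: Gaussian kernel over relative time distance, row-normalised."""
    idx = np.arange(n)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    p = np.exp(-d2 / (2.0 * sigma ** 2))
    return p / p.sum(axis=-1, keepdims=True)

def association_discrepancy(prior, series, eps=1e-8):
    """Symmetrised KL divergence between prior and series associations, per time step."""
    kl_ps = (prior * np.log((prior + eps) / (series + eps))).sum(-1)
    kl_sp = (series * np.log((series + eps) / (prior + eps))).sum(-1)
    return kl_ps + kl_sp
```

Time steps whose sparse series association diverges strongly from the smooth local prior receive a high discrepancy score, which is the signal such methods use to flag anomalies.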
By addressing the limitations of existing approaches, MAAT sets a new standard for unsupervised time series anomaly detection in real-world scenarios. Code available at https://github.com/ilyesbenaissa/MAAT.
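The reconstruction module described above fuses two pathways (the original reconstruction and the Mamba-SSM output) through a gate, with a skip connection on top. Since the paper's exact wiring is not given here, the following is a minimal hedged sketch: the concatenation-based gate and the placement of the skip connection are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(x_rec, x_mamba, w_gate, b_gate):
    """Adaptively fuse the transformer reconstruction with the Mamba-SSM output.

    A learned gate (here a single linear layer over the concatenated pathways)
    decides per feature how much of each pathway to keep; a skip connection
    adds the original reconstruction back on top.
    """
    both = np.concatenate([x_rec, x_mamba], axis=-1)   # (T, 2D)
    gate = sigmoid(both @ w_gate + b_gate)             # (T, D), values in (0, 1)
    fused = gate * x_rec + (1.0 - gate) * x_mamba      # convex per-feature blend
    return fused + x_rec                               # skip connection
```

With zero-initialised gate weights the gate sits at 0.5, so the module starts as an even blend of both pathways plus the skip path, and training can shift the balance toward fidelity or contextual enhancement as needed.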
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.