Anomaly Detection for Hydraulic Systems under Test

Deniz Neufeld, Ute Schmid
{"title":"Anomaly Detection for Hydraulic Systems under Test","authors":"Deniz Neufeld, Ute Schmid","doi":"10.1109/ETFA45728.2021.9613265","DOIUrl":null,"url":null,"abstract":"This work focuses on computationally efficient difference metrics of time series and compares two different unsupervised methods for anomaly classification. It takes place in the domain of hardware systems testing for reliability, where several structurally identical devices are tested at the same time with a load expected in their lifetime use. The devices perform different maneuvers in predefined testing cycles. It is possible that rare, unexpected system defects appear. They often show up in the measured data signals of the system, for example as a decrease in the output pressure of a pump. Due to the intended aging of the parts under load, the measured data also exhibits a concept drift, i.e. a shift in the data distribution. It is of interest to detect anomalous behavior as early as possible to reduce cost, save time and enable accurate root-cause-analysis. We formulate this problem as an anomaly detection task on periodic multivariate time series data. Experiments are evaluated using an open access hydraulic test bench data set by Helwig et al. [1]. The method's performance under concept drift is tested by simulating an aging system using the same data set. We find that Mean Squared Error towards the median in combination with the Modified z-Score is the most robust method for this use case. The solution can be applied from the beginning of a hardware testing cycle. The computations are intuitive to understand, and the classification results can be visualized for better interpretability and plausibility analysis.","PeriodicalId":312498,"journal":{"name":"2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA )","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA )","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETFA45728.2021.9613265","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This work focuses on computationally efficient difference metrics for time series and compares two unsupervised methods for anomaly classification. It is situated in the domain of hardware reliability testing, where several structurally identical devices are tested at the same time under a load representative of their expected lifetime use. The devices perform different maneuvers in predefined testing cycles. Rare, unexpected system defects can appear, and they often show up in the measured data signals of the system, for example as a decrease in the output pressure of a pump. Due to the intended aging of the parts under load, the measured data also exhibits concept drift, i.e. a gradual shift in the data distribution. Detecting anomalous behavior as early as possible is of interest to reduce cost, save time, and enable accurate root-cause analysis. We formulate this problem as an anomaly detection task on periodic multivariate time series data. Experiments are evaluated on the open-access hydraulic test bench data set of Helwig et al. [1]. The method's performance under concept drift is tested by simulating an aging system using the same data set. We find that Mean Squared Error towards the median in combination with the Modified z-Score is the most robust method for this use case. The solution can be applied from the beginning of a hardware testing cycle. The computations are intuitive to understand, and the classification results can be visualized for better interpretability and plausibility analysis.
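As a rough illustration of the approach described in the abstract, the sketch below scores each test cycle by its mean squared error against the element-wise median cycle and flags outlying scores with the Modified z-Score. The array layout, the function names (`mse_to_median`, `modified_z_score`), the synthetic data, and the 3.5 cutoff (the common Iglewicz-Hoaglin threshold) are illustrative assumptions and not taken from the paper.

```python
import numpy as np

def mse_to_median(cycles: np.ndarray) -> np.ndarray:
    """Mean squared error of each test cycle against the element-wise
    median cycle. `cycles` has shape (n_cycles, n_samples, n_sensors)."""
    median_cycle = np.median(cycles, axis=0)      # robust reference cycle
    err = cycles - median_cycle                   # deviation per sample and sensor
    return (err ** 2).mean(axis=(1, 2))           # one scalar score per cycle

def modified_z_score(scores: np.ndarray) -> np.ndarray:
    """Modified z-Score: robust standardisation based on the median
    and the median absolute deviation (MAD)."""
    med = np.median(scores)
    mad = np.median(np.abs(scores - med))
    return 0.6745 * (scores - med) / mad

# Synthetic example (not the test bench data): 60 cycles, 200 samples, 6 sensors.
rng = np.random.default_rng(0)
cycles = rng.normal(size=(60, 200, 6))
cycles[45] += 0.8                                 # inject a simulated defect
scores = mse_to_median(cycles)
anomalous = np.abs(modified_z_score(scores)) > 3.5
print(np.flatnonzero(anomalous))                  # expected: cycle 45 is flagged
```

In the paper's setting, the cycles would come from the hydraulic test bench data set of Helwig et al. [1] rather than synthetic noise, and the per-cycle scores can be plotted over time for the interpretability and plausibility analysis mentioned above.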