Developing Novel Activation Functions in Time Series Anomaly Detection with LSTM Autoencoder

Marina Adriana Mercioni, S. Holban
{"title":"Developing Novel Activation Functions in Time Series Anomaly Detection with LSTM Autoencoder","authors":"Marina Adriana Mercioni, S. Holban","doi":"10.1109/SACI51354.2021.9465604","DOIUrl":null,"url":null,"abstract":"Our proposal consists of developing two novel activation functions in time series anomaly detection, they have the capability to reduce the validation loss. The approach is based on a current activation function in Deep Learning, a very intensive field studied over time, in order to find the most suitable activation in a neural network. In order to achieve this purpose, we used an LSTM (Long Short-Term Memory) Autoencoder architecture, using these two novel functions to see the network’s behavior through introducing them. The key point in our proposal is given by the learnable parameter, assuring more flexibility within the network in weights’ updates, in fact, this property being more powerful than a predefined parameter that will bring a constraint due to its limit. We tested our proposal in comparison to other popular functions such as ReLU (Linear Rectifier Unit), hyperbolic tangent (tanh), Talu activation function. Also, the novelty of this paper consists of taking into consideration of piecewise behavior of an activation function in order to increase the performance of a neural network in Deep Learning.","PeriodicalId":321907,"journal":{"name":"2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI51354.2021.9465604","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We propose two novel activation functions for time series anomaly detection; both are able to reduce the validation loss. The approach builds on an existing activation function from Deep Learning, a field studied intensively over time, with the goal of finding the most suitable activation for a neural network. To this end, we used an LSTM (Long Short-Term Memory) Autoencoder architecture and introduced the two novel functions to observe the network's behavior. The key element of our proposal is a learnable parameter, which gives the network more flexibility during weight updates; this property is more powerful than a predefined parameter, whose fixed value imposes a constraint. We compared our proposal against other popular functions such as ReLU (Rectified Linear Unit), the hyperbolic tangent (tanh), and the Talu activation function. A further novelty of this paper is that it takes the piecewise behavior of an activation function into account in order to increase the performance of a neural network in Deep Learning.
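The abstract does not give the exact form of the two proposed functions, so the sketch below only illustrates the general idea: a piecewise activation whose shape is governed by a single trainable parameter alpha, attached to the bottleneck of a Keras LSTM Autoencoder that scores anomalies by reconstruction error. The activation form, layer sizes, window length, and loss are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative sketch only: the parametric form below is a stand-in,
# NOT the two activation functions proposed in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

class LearnableActivation(layers.Layer):
    """Piecewise activation with one trainable parameter (hypothetical form)."""
    def build(self, input_shape):
        # A single learnable scalar, updated jointly with the network weights,
        # instead of a predefined (fixed) parameter.
        self.alpha = self.add_weight(
            name="alpha", shape=(1,),
            initializer=tf.keras.initializers.Constant(0.5),
            trainable=True)

    def call(self, x):
        # Piecewise behavior: identity for x >= 0, alpha-scaled tanh otherwise.
        return tf.where(x >= 0.0, x, self.alpha * tf.math.tanh(x))

def build_lstm_autoencoder(timesteps, n_features):
    """LSTM Autoencoder; large reconstruction error flags anomalies."""
    inputs = layers.Input(shape=(timesteps, n_features))
    encoded = layers.LSTM(64, return_sequences=False)(inputs)
    # Learnable activation applied at the bottleneck for simplicity; the paper
    # may place its functions elsewhere in the network.
    encoded = LearnableActivation()(encoded)
    repeated = layers.RepeatVector(timesteps)(encoded)
    decoded = layers.LSTM(64, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model

# Usage: train on normal windows only, then threshold the reconstruction error.
model = build_lstm_autoencoder(timesteps=30, n_features=1)
# model.fit(x_train, x_train, epochs=20, validation_split=0.1)
# scores = np.mean(np.abs(model.predict(x_test) - x_test), axis=(1, 2))
```

Because alpha is a trainable weight, its value is adjusted by backpropagation along with the rest of the network, which is the flexibility the abstract attributes to the learnable parameter.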