Invertible Neural Network for Inference Pipeline Anomaly Detection

Malgorzata Schwab, Ashis Biswas
DOI: 10.5121/ijnsa.2023.15501
Journal: International journal of network security & its applications
Published: 2023-09-28
Citations: 0

Abstract

This study combines research in machine learning with system engineering practices to conceptualize a paradigm that enhances the trustworthiness of a machine learning inference pipeline. We explore reversibility in deep neural networks and apply its anomaly detection capabilities to build a framework of integrity verification checkpoints across the inference pipeline of a deployed model. We leverage previous findings and principles regarding several types of autoencoders, deep generative maximum-likelihood training, and the invertibility of neural networks to propose an improved network architecture for anomaly detection. We hypothesize and experimentally confirm that an Invertible Neural Network (INN) trained as a convolutional autoencoder is a superior alternative, naturally suited to this task. The INN's remarkable ability to reconstruct data from its compressed representation and to solve inverse problems is then generalized and applied in the field of Trustworthy AI to achieve integrity verification of an inference pipeline through the concept of INN-based Trusted Neural Network (TNN) nodes placed around the mission-critical parts of the system, as well as end-to-end outcome verification. This work aspires to enhance the robustness and reliability of applications employing artificial intelligence, which play an increasingly noticeable role in highly consequential decision-making processes across many industries and problem domains. INNs are invertible by construction and can be tractably trained in both directions simultaneously. This feature has untapped potential to improve the explainability of machine learning pipelines in support of their trustworthiness and is a topic of our current studies.
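The "invertible by construction" property the abstract relies on can be illustrated with an additive coupling layer, the basic building block of NICE/RealNVP-style invertible networks. The following is a minimal numpy-only sketch, not the authors' architecture: the weights `W` and `b` are random stand-ins for a learned coupling function, and the input split is fixed at the halfway point. It shows why the inverse pass recovers the input exactly, which is what makes reconstruction-based integrity checks possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of the coupling function m(.).
# In a real INN these are learned; here they are random placeholders.
W = rng.normal(size=(2, 2))
b = rng.normal(size=2)

def m(x1):
    # Small nonlinear transform applied to one half of the input.
    return np.tanh(x1 @ W + b)

def forward(x):
    # Additive coupling: y1 = x1, y2 = x2 + m(x1).
    # Invertible by construction, regardless of what m is.
    x1, x2 = x[:2], x[2:]
    return np.concatenate([x1, x2 + m(x1)])

def inverse(y):
    # Exact inverse: x2 = y2 - m(y1), using the same m.
    y1, y2 = y[:2], y[2:]
    return np.concatenate([y1, y2 - m(y1)])

x = rng.normal(size=4)
x_rec = inverse(forward(x))
assert np.allclose(x, x_rec)  # reconstruction is exact up to float precision
```

In an anomaly detection setting of the kind the abstract describes, a trained invertible model reconstructs in-distribution inputs faithfully, so a large discrepancy between an input and its round-trip reconstruction (or a low model likelihood) can serve as the anomaly signal at a verification checkpoint.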