Improving Surrogate Model Prediction by Noise Injection into Autoencoder Latent Space

Michele Lazzara, Max Chevalier, Jasone Garay–Garcia, C. Lapeyre, O. Teste
2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI), October 2022
DOI: 10.1109/ICTAI56018.2022.00085
Citations: 0

Abstract

Autoencoders (AEs) are a powerful tool for enhancing data-driven surrogate modeling: they learn a lower-dimensional representation of high-dimensional data in an encoding-reconstructing fashion. Variational autoencoders (VAEs) improve the interpolation capability of autoencoders by structuring the latent space with a Kullback-Leibler regularization term. However, training a VAE poses practical challenges due to the difficulty of balancing prediction quality against interpolation capability. A compromise between AEs and VAEs is therefore needed to deliver robust predictive models. In this paper, an effective strategy, consisting of injecting noise into the latent space of AEs, is proposed to improve the smoothness of the autoencoder latent space while preserving reconstruction quality. Experimental results show that a model trained with the proposed noise injection technique outperforms AEs, VAEs, and other alternatives in prediction quality.
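The core mechanism described in the abstract can be illustrated with a minimal sketch: during the AE forward pass, the encoder output is perturbed with Gaussian noise before decoding, similar to a VAE's reparameterized sampling but without the KL term. The linear encoder/decoder, the noise scale `sigma`, and the toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_enc):
    # toy linear encoder: project input into the latent space
    return x @ w_enc

def decode(z, w_dec):
    # toy linear decoder: map a latent code back to input space
    return z @ w_dec

# toy data: 100 samples of dimension 8, latent dimension 3
X = rng.normal(size=(100, 8))
w_enc = 0.1 * rng.normal(size=(8, 3))
w_dec = 0.1 * rng.normal(size=(3, 8))

sigma = 0.1  # noise scale: a hyperparameter of the injection scheme

# forward pass with latent-space noise injection
Z = encode(X, w_enc)
Z_noisy = Z + sigma * rng.normal(size=Z.shape)  # inject Gaussian noise
X_hat = decode(Z_noisy, w_dec)

# the usual AE reconstruction loss is then minimized over w_enc, w_dec;
# the decoder must reconstruct well from a neighborhood of each code,
# which is what smooths the latent space
loss = np.mean((X - X_hat) ** 2)
```

Because the noise forces nearby latent codes to decode to similar outputs, interpolation between codes becomes better behaved than in a plain AE, without the KL pressure that can degrade a VAE's reconstructions.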