MetaForecast: Harnessing Model-Agnostic Meta-Learning Approach to Predict Key Metrics of Interconnected Network Topologies

Shruti Jadon, Aryan Jadon
Published in: 2023 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)
DOI: 10.1109/IAICT59002.2023.10205730
Publication date: 2023-07-13
Citations: 2

Abstract

Meta-learning, an approach in machine learning that focuses on "learning how to learn," prioritizes generalization over specialization, mirroring the human ability to derive generalizations from experience and to specialize when tasks are repeated. Training a meta-model requires procuring similar tasks or similar data distributions. In our study, we explored a model-agnostic meta-learning (MAML) approach to predict telemetry data collected from a network of devices. We also proposed a custom architecture, "MetaForecast," wherein a meta-learner learns the generalized intricacies of each site/device's data, allowing us to fine-tune the base learner and create site/device-specific models. Based on our experiments, we observed that using MetaForecast in such a complex telemetry system can:

1) Significantly reduce the training time of a forecasting model for newly added devices/sites: our approach enables fine-tuning to a site-specific model within a small number (fewer than 10) of epochs.

2) Minimize data-gathering requirements: by requiring fewer epochs for model tuning, our approach greatly reduces data-gathering needs. Hence, a new site does not necessitate extensive historical data beyond a few recent entries, depending on granularity.

3) Enable day-1 prediction: if a new site/device is added, the new model can be trained within a few epochs and does not rely on a large amount of past data for training.

To establish a baseline, we compared the performance of our MAML-inspired architecture against individual models per site and against transfer learning. Our findings revealed approximately a 10% reduction in mean squared error, a 50% reduction in computing resources, and a 65% reduction in data-gathering requirements.

Based on our comprehensive research, we assert that integrating meta-learning techniques with our proposed architecture yields notable improvements in forecasting accuracy, accompanied by substantial reductions in training time, data requirements, and computing resources.
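The meta-train-then-fine-tune workflow the abstract describes can be illustrated with a minimal first-order MAML sketch. This is not the paper's MetaForecast architecture: the linear model, synthetic per-site tasks, and all hyperparameters below are illustrative assumptions, chosen only to show the two loops (meta-training across sites, then a few-step adaptation to a new site).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Hypothetical stand-in for one site's telemetry: y = w*x + b
    # with site-specific slope w and offset b.
    w, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1.0, 1.0, size=(20, 1))
    return x, w * x + b

def loss_grad(theta, x, y):
    # MSE loss and its gradient for the toy linear model
    # y_hat = theta[0] * x + theta[1].
    err = x * theta[0] + theta[1] - y
    loss = float(np.mean(err ** 2))
    grad = np.array([2 * np.mean(err * x), 2 * np.mean(err)])
    return loss, grad

def maml_step(theta, tasks, inner_lr=0.05, meta_lr=0.01, inner_steps=5):
    # First-order MAML: adapt a copy of theta per task, then move theta
    # toward initializations that adapt well (gradient taken at the
    # adapted parameters, ignoring second-order terms).
    meta_grad = np.zeros_like(theta)
    for x, y in tasks:
        phi = theta.copy()
        for _ in range(inner_steps):
            _, g = loss_grad(phi, x, y)
            phi -= inner_lr * g
        _, g = loss_grad(phi, x, y)
        meta_grad += g
    return theta - meta_lr * meta_grad / len(tasks)

# Meta-train the shared initialization across batches of synthetic sites.
theta = np.zeros(2)
for _ in range(500):
    theta = maml_step(theta, [make_task() for _ in range(4)])

# A "new site": fine-tune from the meta-initialization in fewer than
# 10 gradient steps, mirroring the day-1 prediction claim.
x_new, y_new = make_task()
phi = theta.copy()
for _ in range(8):
    _, g = loss_grad(phi, x_new, y_new)
    phi -= 0.05 * g

loss_before, _ = loss_grad(theta, x_new, y_new)
loss_after, _ = loss_grad(phi, x_new, y_new)
```

After meta-training, `loss_after` on the unseen task drops below `loss_before` in just eight inner steps, which is the behavior the fewer-than-10-epochs claim relies on.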