Eliyahu Sason, Yackov Lubarsky, Alexei Gaissinski, Eli Kravchik, Pavel Kisilev
{"title":"基于oracle的数据生成,用于高效的数字孪生网络训练","authors":"None Eliyahu Sason, None Yackov Lubarsky, None Alexei Gaissinski, None Eli Kravchik, None Pavel Kisilev","doi":"10.52953/aweu6345","DOIUrl":null,"url":null,"abstract":"Recent advances in Graph Neural Networks (GNNs) has opened new capabilities to analyze complex communication systems. However, little work has been done to study the effects of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large previously unseen networks under strict limitations on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the above generated data set by analyzing the data embeddings from different Oracle models. This procedure provides a very small but information-rich training set. The above data embedding method translates highly heterogeneous network samples into a common embedding spac, wherein the samples can be easily related to each other. The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking challenge.","PeriodicalId":93013,"journal":{"name":"ITU journal : ICT discoveries","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Oracle-based data generation for highly efficient digital twin network training\",\"authors\":\"None Eliyahu Sason, None Yackov Lubarsky, None Alexei Gaissinski, None Eli Kravchik, None Pavel Kisilev\",\"doi\":\"10.52953/aweu6345\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advances in Graph Neural Networks (GNNs) has opened new capabilities to analyze complex communication systems. However, little work has been done to study the effects of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large previously unseen networks under strict limitations on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the above generated data set by analyzing the data embeddings from different Oracle models. This procedure provides a very small but information-rich training set. The above data embedding method translates highly heterogeneous network samples into a common embedding spac, wherein the samples can be easily related to each other. 
The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking challenge.\",\"PeriodicalId\":93013,\"journal\":{\"name\":\"ITU journal : ICT discoveries\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ITU journal : ICT discoveries\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.52953/aweu6345\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ITU journal : ICT discoveries","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52953/aweu6345","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Oracle-based data generation for highly efficient digital twin network training
Recent advances in Graph Neural Networks (GNNs) have opened new capabilities for analyzing complex communication systems. However, little work has studied the effect of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large, previously unseen networks under strict limits on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the generated data set by analyzing data embeddings from different Oracle models. This procedure yields a very small but information-rich training set. The data embedding method translates highly heterogeneous network samples into a common embedding space, in which the samples can easily be related to each other. The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking Challenge.
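To make the embedding-based sampling step concrete, the sketch below shows one plausible way to pick a small, information-rich training subset: per-sample embeddings from several Oracle models are mapped into a common space by normalization and concatenation, clustered, and one representative per cluster is kept. This is only an illustration under assumptions; the function name `select_training_subset`, the use of scikit-learn's KMeans, and the concatenation scheme are placeholders, since the abstract does not specify the actual clustering algorithm or how the common embedding space is built.

```python
# Hedged sketch (not the authors' exact procedure): choose a compact training
# subset by clustering concatenated embeddings from several "Oracle" models
# and keeping the sample nearest to each cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def select_training_subset(oracle_embeddings: list[np.ndarray], budget: int) -> np.ndarray:
    """Return `budget` sample indices chosen from the joint Oracle embedding space.

    oracle_embeddings: list of (n_samples, d_i) arrays, one per Oracle model
                       (hypothetical inputs for this illustration).
    """
    # Map heterogeneous samples into one common embedding space by
    # L2-normalizing each Oracle's embeddings and concatenating them.
    normed = [e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-12)
              for e in oracle_embeddings]
    joint = np.concatenate(normed, axis=1)  # shape: (n_samples, sum_i d_i)

    # One cluster per training sample allowed by the data budget.
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(joint)

    # For each cluster, keep the member closest to its centroid.
    selected = []
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(joint[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(sorted(selected))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for embeddings produced by two Oracle models on 500 samples.
    emb_a, emb_b = rng.normal(size=(500, 32)), rng.normal(size=(500, 16))
    idx = select_training_subset([emb_a, emb_b], budget=20)
    print("selected training sample indices:", idx)
```

A centroid-nearest selection is just one coreset heuristic; the key idea conveyed by the abstract is that the common embedding space makes otherwise heterogeneous network samples comparable, so a clustering pass can identify a small subset that still covers the data distribution.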