Learning Transfers via Transfer Learning
Md. Arifuzzaman, Engin Arslan
2021 IEEE Workshop on Innovating the Network for Data-Intensive Science (INDIS), November 2021
DOI: 10.1109/indis54524.2021.00009
Detecting performance anomalies is key to utilizing network resources efficiently and improving quality of service. Researchers have proposed various approaches to identify anomalies by analyzing performance statistics with heuristics (e.g., change point detection) and Machine Learning (ML) models. Although these models yield high accuracy in the networks they are trained for, their performance degrades severely when transferred to different network settings. This is because existing models detect anomalies by capturing changes in transfer throughput and observed RTT values, which depend on network settings. In this paper, we propose a novel feature transformation method that eliminates the network dependence of ML models for anomaly diagnosis, enhancing their performance when transferred to new networks (i.e., transfer learning) and thereby mitigating the need to gather training data in each network separately. We validate the findings through experimental evaluations on simulated and production networks and show that the proposed feature transformation improves transfer learning performance for anomaly diagnosis from less than 60% to over 90%. Finally, we evaluate the proposed solutions under various congestion control algorithms and observe that models trained using BBR attain the best transfer learning performance compared to Cubic and HTCP.
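To illustrate the idea of removing network dependence from throughput/RTT features, here is a minimal sketch. It is not the paper's actual transformation (which the abstract does not specify); it assumes a simple baseline-ratio normalization, where each observation is expressed relative to the network's own healthy-state statistics so that the same anomaly pattern looks identical across networks with very different link speeds and latencies.

```python
# Hypothetical sketch (not the authors' exact method): express raw
# [throughput, RTT] samples as ratios relative to each network's own
# healthy baseline, so an anomaly classifier trained on one network
# can transfer to another.
import numpy as np

def normalize_features(samples, baseline):
    """Express throughput and RTT relative to the network's baseline.

    samples:  array of shape (n, 2) -> [throughput, rtt] per observation
    baseline: array of shape (2,)   -> healthy-state [throughput, rtt]
    """
    return samples / baseline  # dimensionless ratios, network-independent

# Network A: 10 Gbps links, 20 ms RTT; Network B: 1 Gbps, 80 ms RTT.
baseline_a = np.array([10_000.0, 20.0])   # [Mbps, ms]
baseline_b = np.array([1_000.0, 80.0])

# Suppose an anomaly (e.g., congestion) halves throughput and doubles RTT.
anomaly_a = baseline_a * np.array([0.5, 2.0])   # raw: [5000 Mbps, 40 ms]
anomaly_b = baseline_b * np.array([0.5, 2.0])   # raw: [500 Mbps, 160 ms]

# Raw values differ by an order of magnitude, but after normalization the
# anomaly signature is identical, so a decision boundary learned on
# Network A applies directly to Network B.
ratio_a = normalize_features(anomaly_a[None, :], baseline_a)
ratio_b = normalize_features(anomaly_b[None, :], baseline_b)
assert np.allclose(ratio_a, ratio_b)  # both -> [[0.5, 2.0]]
```

This kind of normalization is one plausible way to make throughput/RTT features comparable across networks; the paper's actual transformation may differ in detail, but the abstract's claim rests on the same principle of decoupling anomaly signatures from absolute network parameters.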