S. Nikolaev, Eddy Banks, P. Barnes, D. Jefferson, Steven G. Smith
{"title":"在分布式ns-3模拟中突破极限:10亿个节点","authors":"S. Nikolaev, Eddy Banks, P. Barnes, D. Jefferson, Steven G. Smith","doi":"10.1145/2756509.2756525","DOIUrl":null,"url":null,"abstract":"In this paper, we describe the results of simulation of very large (up to 109 nodes), planetary-scale networks using ns-3 simulator. The modeled networks consist of the small-world core graph of network routers and an equal number of the leaf nodes (one leaf node per router). Each bidirectional link in the simulation carries on-off traffic. Using LLNL's high-performance computing (HPC) clusters, we conducted strong and weak scaling studies, and investigated on-node scalability for MPI nodes. The scaling relations for both runtime and memory are derived. In addition we examine the packet transmission rate in the simulation and its scalability. Performance of the default ns-3 parallel scheduler is compared to the custom-designed NULL-message scheduler.","PeriodicalId":272891,"journal":{"name":"Proceedings of the 2015 Workshop on ns-3","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Pushing the envelope in distributed ns-3 simulations: one billion nodes\",\"authors\":\"S. Nikolaev, Eddy Banks, P. Barnes, D. Jefferson, Steven G. Smith\",\"doi\":\"10.1145/2756509.2756525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we describe the results of simulation of very large (up to 109 nodes), planetary-scale networks using ns-3 simulator. The modeled networks consist of the small-world core graph of network routers and an equal number of the leaf nodes (one leaf node per router). Each bidirectional link in the simulation carries on-off traffic. Using LLNL's high-performance computing (HPC) clusters, we conducted strong and weak scaling studies, and investigated on-node scalability for MPI nodes. The scaling relations for both runtime and memory are derived. In addition we examine the packet transmission rate in the simulation and its scalability. Performance of the default ns-3 parallel scheduler is compared to the custom-designed NULL-message scheduler.\",\"PeriodicalId\":272891,\"journal\":{\"name\":\"Proceedings of the 2015 Workshop on ns-3\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2015 Workshop on ns-3\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2756509.2756525\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2015 Workshop on ns-3","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2756509.2756525","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Pushing the envelope in distributed ns-3 simulations: one billion nodes
In this paper, we describe the results of simulating very large, planetary-scale networks (up to 10^9 nodes) with the ns-3 simulator. The modeled networks consist of a small-world core graph of network routers and an equal number of leaf nodes (one leaf node per router). Each bidirectional link in the simulation carries on-off traffic. Using LLNL's high-performance computing (HPC) clusters, we conducted strong- and weak-scaling studies and investigated on-node scalability for MPI tasks. Scaling relations for both runtime and memory are derived. In addition, we examine the packet transmission rate in the simulation and its scalability. The performance of the default ns-3 parallel scheduler is compared to that of the custom-designed NULL-message scheduler.
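The abstract contrasts the default ns-3 distributed scheduler with the NULL-message scheduler and describes links carrying on-off traffic across MPI ranks. The sketch below, based on the public ns-3 MPI and applications modules, shows how such a run can be configured; it is not the authors' billion-node setup, and the topology is reduced to a single router/leaf pair, with data rates, delays, and durations chosen purely for illustration.

```cpp
// Hedged sketch: selecting the ns-3 parallel scheduler and attaching on-off
// traffic to a point-to-point link split across two MPI ranks.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/mpi-interface.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  bool nullMsg = true;  // true: NULL-message scheduler; false: default distributed scheduler

  // The simulator implementation must be chosen before enabling the MPI interface.
  GlobalValue::Bind ("SimulatorImplementationType",
                     StringValue (nullMsg ? "ns3::NullMessageSimulatorImpl"
                                          : "ns3::DistributedSimulatorImpl"));
  MpiInterface::Enable (&argc, &argv);
  uint32_t systemId = MpiInterface::GetSystemId ();

  // One router/leaf pair split across two ranks, joined by a bidirectional link.
  Ptr<Node> router = CreateObject<Node> (0);  // owned by rank 0
  Ptr<Node> leaf   = CreateObject<Node> (1);  // owned by rank 1
  NodeContainer nodes (router, leaf);

  PointToPointHelper p2p;
  p2p.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));  // illustrative value
  p2p.SetChannelAttribute ("Delay", StringValue ("2ms"));       // illustrative value
  NetDeviceContainer devices = p2p.Install (nodes);

  InternetStackHelper internet;
  internet.Install (nodes);
  Ipv4AddressHelper ipv4;
  ipv4.SetBase ("10.0.0.0", "255.255.255.0");
  Ipv4InterfaceContainer ifaces = ipv4.Assign (devices);

  uint16_t port = 9;
  if (systemId == 0)
    {
      // On-off source on the router side, sending toward the leaf.
      OnOffHelper onOff ("ns3::UdpSocketFactory",
                         InetSocketAddress (ifaces.GetAddress (1), port));
      onOff.SetAttribute ("OnTime",  StringValue ("ns3::ExponentialRandomVariable[Mean=1.0]"));
      onOff.SetAttribute ("OffTime", StringValue ("ns3::ExponentialRandomVariable[Mean=1.0]"));
      onOff.SetAttribute ("DataRate", StringValue ("1Mbps"));
      ApplicationContainer src = onOff.Install (router);
      src.Start (Seconds (1.0));
      src.Stop (Seconds (10.0));
    }
  if (systemId == 1)
    {
      // Sink on the leaf node to absorb the traffic.
      PacketSinkHelper sink ("ns3::UdpSocketFactory",
                             InetSocketAddress (Ipv4Address::GetAny (), port));
      ApplicationContainer dst = sink.Install (leaf);
      dst.Start (Seconds (0.0));
      dst.Stop (Seconds (10.0));
    }

  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  MpiInterface::Disable ();
  return 0;
}
```

This sketch requires ns-3 built with MPI support and should be launched under mpirun with two ranks; applications are installed only on the rank that owns the corresponding node, mirroring how distributed ns-3 partitions work across LPs.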