Real-time deep learning based traffic analytics
Massimo Gallo, A. Finamore, G. Simon, Dario Rossi
DOI: 10.1145/3405837.3411398
Proceedings of the SIGCOMM '20 Poster and Demo Sessions, August 10, 2020

Abstract. Increased interest in Deep Learning (DL) technologies has led to the development of a new generation of specialized hardware accelerators [8], such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) [1, 2]. Although attractive for implementing real-time, analytics-based traffic engineering and for fostering the development of self-driving networks [5], integrating such components into network routers is not trivial. Routers typically aim to minimize per-packet processing overhead (e.g., Ethernet switching, IP forwarding, telemetry), and the design choices made when integrating a new accelerator (e.g., power and memory consumption) must factor in these key requirements. Previous work on DL hardware accelerators has overlooked router-specific constraints (e.g., strict latency) and focused instead on cloud deployment [4] and image processing. Likewise, there is limited literature on applying DL to traffic processing at line rate [6, 9].
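To make the "strict latency" constraint concrete, a back-of-envelope calculation (not from the paper; the 10 Gb/s link speed and minimum-size frames are illustrative assumptions) shows how little time a router has per packet at line rate:

```python
# Per-packet time budget at line rate, assuming a 10 Gb/s link
# carrying minimum-size 64-byte Ethernet frames. Each frame also
# occupies 20 bytes of wire overhead (8-byte preamble + 12-byte
# inter-frame gap), per standard Ethernet framing.
LINE_RATE_BPS = 10e9            # assumed link speed: 10 Gb/s
WIRE_BYTES = 64 + 20            # frame + preamble/inter-frame gap

bits_per_packet = WIRE_BYTES * 8
pps = LINE_RATE_BPS / bits_per_packet   # packets per second
budget_ns = 1e9 / pps                   # nanoseconds per packet

print(f"{pps / 1e6:.2f} Mpps, {budget_ns:.1f} ns per packet")
# → 14.88 Mpps, 67.2 ns per packet
```

At roughly 67 ns per packet in the worst case, even a single DRAM access is a significant fraction of the budget, which is why accelerator designs tuned for batch-oriented cloud inference do not transfer directly to in-router traffic analytics.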