TeeBench: Seamless Benchmarking in Trusted Execution Environments

K. Maliszewski, Tilman Dietzel, Jorge-Arnulfo Quiané-Ruiz, V. Markl
{"title":"TeeBench:可信执行环境中的无缝基准测试","authors":"K. Maliszewski, Tilman Dietzel, Jorge-Arnulfo Quiané-Ruiz, V. Markl","doi":"10.1145/3555041.3589726","DOIUrl":null,"url":null,"abstract":"Trusted Execution Environments (TEEs) have enabled building secure systems that operate on untrusted machines. However, TEEs' architecture questions previous performance findings. The existing relational algorithms have been designed for traditional CPUs. Prior work has shown that these algorithms underperform in TEEs and, in most cases, can not be easily reused. Moreover, they frequently used benchmarks pertinent to CPUs and ignored TEE-specific metrics essential to understand the performance differences. Therefore, there is a need for a fair benchmarking approach for TEE algorithms. In this demonstration, we showcase TeeBench, a unified benchmarking framework for relational operators across TEEs. TeeBench focuses on TEE-specific hardware metrics. It enables a comprehensive performance analysis that helps researchers to evaluate their advances. It comes with an interactive web browser tool that allows the users to upload their implementation of a relational algorithm and seamlessly benchmark it across different TEEs. In addition, it introduces a novel TEE-Analyzer that hints the users about performance bottlenecks and suggests possible code improvements. Users receive instant feedback if changes to their algorithm improve the performance through an interactive, human-friendly web interface. We expect TeeBench to encourage the usage of TEEs and to advance the study of privacy-preserving systems.","PeriodicalId":161812,"journal":{"name":"Companion of the 2023 International Conference on Management of Data","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"TeeBench: Seamless Benchmarking in Trusted Execution Environments\",\"authors\":\"K. Maliszewski, Tilman Dietzel, Jorge-Arnulfo Quiané-Ruiz, V. Markl\",\"doi\":\"10.1145/3555041.3589726\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Trusted Execution Environments (TEEs) have enabled building secure systems that operate on untrusted machines. However, TEEs' architecture questions previous performance findings. The existing relational algorithms have been designed for traditional CPUs. Prior work has shown that these algorithms underperform in TEEs and, in most cases, can not be easily reused. Moreover, they frequently used benchmarks pertinent to CPUs and ignored TEE-specific metrics essential to understand the performance differences. Therefore, there is a need for a fair benchmarking approach for TEE algorithms. In this demonstration, we showcase TeeBench, a unified benchmarking framework for relational operators across TEEs. TeeBench focuses on TEE-specific hardware metrics. It enables a comprehensive performance analysis that helps researchers to evaluate their advances. It comes with an interactive web browser tool that allows the users to upload their implementation of a relational algorithm and seamlessly benchmark it across different TEEs. In addition, it introduces a novel TEE-Analyzer that hints the users about performance bottlenecks and suggests possible code improvements. Users receive instant feedback if changes to their algorithm improve the performance through an interactive, human-friendly web interface. 
We expect TeeBench to encourage the usage of TEEs and to advance the study of privacy-preserving systems.\",\"PeriodicalId\":161812,\"journal\":{\"name\":\"Companion of the 2023 International Conference on Management of Data\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Companion of the 2023 International Conference on Management of Data\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3555041.3589726\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Companion of the 2023 International Conference on Management of Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3555041.3589726","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Trusted Execution Environments (TEEs) have enabled building secure systems that operate on untrusted machines. However, TEEs' architecture calls previous performance findings into question. Existing relational algorithms have been designed for traditional CPUs. Prior work has shown that these algorithms underperform in TEEs and, in most cases, cannot be easily reused. Moreover, prior studies frequently used benchmarks pertinent to CPUs and ignored the TEE-specific metrics essential to understanding the performance differences. Therefore, there is a need for a fair benchmarking approach for TEE algorithms. In this demonstration, we showcase TeeBench, a unified benchmarking framework for relational operators across TEEs. TeeBench focuses on TEE-specific hardware metrics. It enables a comprehensive performance analysis that helps researchers evaluate their advances. It comes with an interactive web browser tool that allows users to upload their implementation of a relational algorithm and seamlessly benchmark it across different TEEs. In addition, it introduces a novel TEE-Analyzer that alerts users to performance bottlenecks and suggests possible code improvements. Through an interactive, human-friendly web interface, users receive instant feedback on whether changes to their algorithm improve performance. We expect TeeBench to encourage the usage of TEEs and to advance the study of privacy-preserving systems.
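The demonstration revolves around users uploading an implementation of a relational operator and benchmarking it across TEEs. As a purely illustrative sketch, the C snippet below shows the general shape such an uploaded operator could take: a self-contained chained hash join over two in-memory relations. The tuple_t and relation_t layouts, the join() entry point, and the fixed-width key/payload format are assumptions made for illustration, not the actual TeeBench interface.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch only: the tuple and relation layouts and the join()
 * entry point are assumptions for illustration, not the TeeBench interface.
 * Error handling is omitted for brevity. */
typedef struct { int32_t key; int32_t payload; } tuple_t;
typedef struct { tuple_t *tuples; uint64_t size; } relation_t;

/* Chained hash join: build a hash table on R, probe it with S,
 * and return the number of matching tuple pairs. */
uint64_t join(const relation_t *R, const relation_t *S) {
    uint64_t nbuckets = 1;
    while (nbuckets < R->size * 2) nbuckets <<= 1;      /* power-of-two table */

    int64_t *heads = malloc(nbuckets * sizeof *heads);   /* bucket -> first tuple index */
    int64_t *next  = malloc(R->size  * sizeof *next);    /* per-tuple chaining links */
    memset(heads, -1, nbuckets * sizeof *heads);         /* -1 marks an empty bucket */

    for (uint64_t i = 0; i < R->size; i++) {             /* build phase */
        uint64_t b = (uint64_t)(uint32_t)R->tuples[i].key & (nbuckets - 1);
        next[i]  = heads[b];
        heads[b] = (int64_t)i;
    }

    uint64_t matches = 0;
    for (uint64_t j = 0; j < S->size; j++) {             /* probe phase */
        uint64_t b = (uint64_t)(uint32_t)S->tuples[j].key & (nbuckets - 1);
        for (int64_t i = heads[b]; i != -1; i = next[i])
            if (R->tuples[i].key == S->tuples[j].key)
                matches++;
    }

    free(heads);
    free(next);
    return matches;
}

In SGX-like TEEs, the random memory accesses of the build and probe phases put pressure on the limited enclave memory; surfacing that kind of TEE-specific effect is what the abstract describes TeeBench's hardware metrics and TEE-Analyzer hints as targeting.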