Offloading the Training of an I/O Access Pattern Detector to the Cloud

C. Künas, M. Serpa, J. L. Bez, E. Padoin, P. Navaux
{"title":"将I/O访问模式检测器的训练卸载到云端","authors":"C. Künas, M. Serpa, J. L. Bez, E. Padoin, P. Navaux","doi":"10.1109/sbac-padw53941.2021.00013","DOIUrl":null,"url":null,"abstract":"I/O operations are a bottleneck for numerous applications, so optimizing the performance of these operations is of paramount importance. Many techniques explore and apply optimizations to different layers of the I/O stack to improve performance. The difficulty that arises is that the workload changes constantly. So detecting access patterns correctly, at runtime, becomes essential for systems that seek to self-adjust their parameters. Furthermore, the I/O pattern detection techniques should represent minimal overhead and should be able to perform detection as quickly as possible. This paper approaches a machine learning technique for detecting the I/O access patterns and proposes offloading the local training workload to the cloud using a TPU accelerator. Such an approach does not interfere with classifier accuracy (reaching up to 99% accuracy). Still, it allows the training to be asynchronous, enabling the local machine to allocate its computing resources to scientific applications while the model is trained or updated in the cloud.","PeriodicalId":233108,"journal":{"name":"2021 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Offloading the Training of an I/O Access Pattern Detector to the Cloud\",\"authors\":\"C. Künas, M. Serpa, J. L. Bez, E. Padoin, P. Navaux\",\"doi\":\"10.1109/sbac-padw53941.2021.00013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"I/O operations are a bottleneck for numerous applications, so optimizing the performance of these operations is of paramount importance. Many techniques explore and apply optimizations to different layers of the I/O stack to improve performance. The difficulty that arises is that the workload changes constantly. So detecting access patterns correctly, at runtime, becomes essential for systems that seek to self-adjust their parameters. Furthermore, the I/O pattern detection techniques should represent minimal overhead and should be able to perform detection as quickly as possible. This paper approaches a machine learning technique for detecting the I/O access patterns and proposes offloading the local training workload to the cloud using a TPU accelerator. Such an approach does not interfere with classifier accuracy (reaching up to 99% accuracy). 
Still, it allows the training to be asynchronous, enabling the local machine to allocate its computing resources to scientific applications while the model is trained or updated in the cloud.\",\"PeriodicalId\":233108,\"journal\":{\"name\":\"2021 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)\",\"volume\":\"53 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/sbac-padw53941.2021.00013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/sbac-padw53941.2021.00013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

I/O operations are a bottleneck for numerous applications, so optimizing their performance is of paramount importance. Many techniques explore and apply optimizations to different layers of the I/O stack to improve performance. The difficulty is that the workload changes constantly, so correctly detecting access patterns at runtime becomes essential for systems that seek to self-adjust their parameters. Furthermore, I/O pattern detection techniques should impose minimal overhead and should perform detection as quickly as possible. This paper applies a machine learning technique for detecting I/O access patterns and proposes offloading the local training workload to the cloud using a TPU accelerator. This approach does not hurt classifier accuracy (reaching up to 99%), yet it allows the training to be asynchronous, enabling the local machine to allocate its computing resources to scientific applications while the model is trained or updated in the cloud.
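
To illustrate the offloading workflow described in the abstract, below is a minimal sketch, not the authors' code: it trains a small Keras classifier under TensorFlow's TPUStrategy when a cloud TPU is available (falling back to the default strategy otherwise) and exports the trained model so the local machine only loads it for inference. The feature count, the set of access-pattern classes, and the synthetic training data are assumptions introduced purely for illustration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch of cloud-offloaded training for an I/O access-pattern
# classifier. Feature layout and class labels below are assumptions, not the
# paper's actual feature set or model.
NUM_FEATURES = 16   # e.g., per-window I/O counters (assumed layout)
NUM_CLASSES = 3     # e.g., contiguous / strided / random (assumed labels)

def connect_strategy() -> tf.distribute.Strategy:
    """Use a TPU if the runtime exposes one; otherwise fall back to the default."""
    try:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except (ValueError, tf.errors.NotFoundError):
        return tf.distribute.get_strategy()

strategy = connect_strategy()

with strategy.scope():
    # A small dense classifier; the paper's actual architecture may differ.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Synthetic stand-in for I/O trace features; the real pipeline would upload
# feature vectors collected on the HPC node.
x = np.random.rand(1024, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(1024,))

model.fit(x, y, batch_size=128, epochs=5, verbose=0)

# Export the trained model so the local machine can fetch it for inference
# while its own compute resources stay dedicated to the scientific application.
model.save("io_pattern_detector.keras")
```

Under this kind of setup, the local node's role reduces to collecting feature vectors, uploading them, and periodically pulling the updated model, which is what makes the asynchronous, overhead-minimizing training described in the abstract possible.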