Enabling Android NNAPI Flow for TVM Runtime

Ming-Yi Lai, Chia-Yu Sung, Jenq-Kuen Lee, Ming-Yu Hung
{"title":"Enabling Android NNAPI Flow for TVM Runtime","authors":"Ming-Yi Lai, Chia-Yu Sung, Jenq-Kuen Lee, Ming-Yu Hung","doi":"10.1145/3409390.3409393","DOIUrl":null,"url":null,"abstract":"With machine learning on the rise, mobile platforms are striving to offer inference acceleration on edge devices so that related applications can achieve satisfiable performance. With this background, this work aims at interfacing inference on Android with TVM, an inference-focusing compiler for machine learning, and NNAPI, the official neural network API provided by Android. This work presents a flow to integrate NNAPI into TVM-generated inference model with a partition algorithm to determine which parts of the model should be computed on NNAPI and which should not. Conducted experiments show that properly partitioned models can achieve significant speedup using NNAPI when compared to pure TVM-generated CPU inference. In addition, our enable flow potentially benefits both frameworks by allowing them to leverage each other in AI model deployments.","PeriodicalId":350506,"journal":{"name":"Workshop Proceedings of the 49th International Conference on Parallel Processing","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop Proceedings of the 49th International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3409390.3409393","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

With machine learning on the rise, mobile platforms are striving to offer inference acceleration on edge devices so that related applications can achieve satisfactory performance. Against this background, this work interfaces on-device inference on Android with TVM, an inference-focused compiler for machine learning, and NNAPI, the official neural network API provided by Android. It presents a flow that integrates NNAPI into TVM-generated inference models, together with a partition algorithm that determines which parts of a model should be computed on NNAPI and which should not. Experiments show that properly partitioned models achieve significant speedup with NNAPI compared to pure TVM-generated CPU inference. In addition, the enabling flow potentially benefits both frameworks by allowing them to leverage each other in AI model deployments.
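The paper's own partition algorithm is not reproduced here, but as a rough illustration of the kind of graph splitting the abstract describes, the sketch below uses TVM's Bring-Your-Own-Codegen (BYOC) Relay passes to annotate operators for an external codegen named "nnapi" and cut the module into NNAPI and CPU subgraphs. The "nnapi" target name and the per-operator support predicates are assumptions for illustration only, not the authors' implementation; running this end to end would additionally require registering a real NNAPI codegen under that name.

```python
# Minimal sketch (assumed, not the paper's implementation) of
# partitioning a Relay module for a hypothetical "nnapi" codegen.
import tvm
from tvm import relay

# Statically mark ops we assume NNAPI can accelerate. The paper
# instead uses a partition algorithm to decide placement.
@tvm.ir.register_op_attr("nn.conv2d", "target.nnapi")
def _conv2d_supported(expr):
    return True

@tvm.ir.register_op_attr("nn.relu", "target.nnapi")
def _relu_supported(expr):
    return True

def partition_for_nnapi(mod):
    """Split a Relay module into "nnapi" regions and default
    regions that fall back to TVM-generated CPU code."""
    seq = tvm.transform.Sequential([
        relay.transform.AnnotateTarget("nnapi"),  # tag supported ops
        relay.transform.MergeCompilerRegions(),   # merge adjacent tagged ops
        relay.transform.PartitionGraph(),         # split into subgraphs
    ])
    return seq(mod)
```

A static allowlist like this one ignores data-transfer and per-operator cost; the abstract's point is precisely that a cost-aware partition decides which regions are worth offloading to NNAPI and which should stay on the TVM CPU path.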