Preferred Benchmarking Criteria for Systematic Taxonomy of Embedded Platforms (STEP) in Human System Interaction Systems

A. Kwaśniewska, Sharath Raghava, Carlos Davila, Mikael Sevenier, D. Gamba, J. Rumiński
{"title":"Preferred Benchmarking Criteria for Systematic Taxonomy of Embedded Platforms (STEP) in Human System Interaction Systems","authors":"A. Kwaśniewska, Sharath Raghava, Carlos Davila, Mikael Sevenier, D. Gamba, J. Rumiński","doi":"10.1109/HSI55341.2022.9869470","DOIUrl":null,"url":null,"abstract":"The rate of progress in the field of Artificial Intelligence (AI) and Machine Learning (ML) has significantly increased over the past ten years and continues to accelerate. Since then, AI has made the leap from research case studies to real production ready applications. The significance of this growth cannot be undermined as it catalyzed the very nature of computing. Conventional platforms struggle to achieve greater performance and efficiency, what causes a surging demand for innovative AI accelerators, specialized platforms and purpose-built computes. At the same time, it is required to provide solutions for assessment of ML platform performance in a reproducible and unbiased manner to be able to provide a fair comparison of different products. This is especially valid for Human System Interaction (HSI) systems that require specific data handling for low latency responses in emergency situations or to improve user experience, as well as for preserving data privacy and security by processing it locally. Taking it into account, this work presents a comprehensive guideline on preferred benchmarking criteria for evaluation of ML platforms that include both lower level analysis of ML models and system-level evaluation of the entire pipeline. In addition, we propose a Systematic Taxonomy of Embedded Platforms (STEP) that can be used by the community and customers for better selection of specific ML hardware consistent with their needs for better design of ML-based HSI solutions.","PeriodicalId":282607,"journal":{"name":"2022 15th International Conference on Human System Interaction (HSI)","volume":"224 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 15th International Conference on Human System Interaction (HSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HSI55341.2022.9869470","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The rate of progress in the field of Artificial Intelligence (AI) and Machine Learning (ML) has increased significantly over the past ten years and continues to accelerate. In that time, AI has made the leap from research case studies to production-ready applications. The significance of this growth cannot be overstated, as it has transformed the very nature of computing. Conventional platforms struggle to achieve greater performance and efficiency, which drives a surging demand for innovative AI accelerators, specialized platforms, and purpose-built compute. At the same time, solutions are needed for assessing ML platform performance in a reproducible and unbiased manner so that different products can be compared fairly. This is especially relevant for Human System Interaction (HSI) systems, which require specific data handling for low-latency responses in emergency situations or to improve the user experience, as well as for preserving data privacy and security by processing data locally. Taking this into account, this work presents a comprehensive guideline on preferred benchmarking criteria for the evaluation of ML platforms, covering both lower-level analysis of ML models and system-level evaluation of the entire pipeline. In addition, we propose a Systematic Taxonomy of Embedded Platforms (STEP) that the community and customers can use to select specific ML hardware consistent with their needs and to better design ML-based HSI solutions.
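The abstract's emphasis on reproducible, unbiased latency measurement can be illustrated with a minimal sketch. This is not code from the paper: `run_inference`, the warm-up count, and the iteration count are illustrative assumptions standing in for whatever model and runtime are under test; the point is simply that a fixed warm-up phase, a fixed number of measured iterations, and percentile statistics make results comparable across embedded platforms.

```python
# Minimal, hypothetical sketch of reproducible inference-latency measurement:
# fixed warm-up, fixed iteration count, and percentile reporting.
import time
import statistics


def run_inference(sample):
    # Placeholder workload; in practice this would invoke the model on the
    # embedded runtime being benchmarked.
    return sum(x * x for x in sample)


def benchmark(fn, sample, warmup=10, iterations=100):
    # Warm-up runs let caches, JIT compilation, and power states settle so
    # that the measured iterations reflect steady-state behaviour.
    for _ in range(warmup):
        fn(sample)

    latencies_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(sample)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    latencies_ms.sort()
    return {
        "mean_ms": statistics.fmean(latencies_ms),
        "p50_ms": latencies_ms[len(latencies_ms) // 2],
        "p99_ms": latencies_ms[int(len(latencies_ms) * 0.99) - 1],
    }


if __name__ == "__main__":
    print(benchmark(run_inference, list(range(1024))))
```

Reporting tail latency (p99) alongside the mean matters for HSI use cases, where worst-case response time in emergency situations is often more important than average throughput.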