Inference for Trustworthy Machine Intelligence: Challenges and Solutions

D. Verma
2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), December 2022. DOI: 10.1109/CogMI56440.2022.00014

Abstract

To create AI/ML-based solutions that will be trusted in production, the issues that hamper the use of AI models in practical solutions need to be addressed. Despite significant interest in AI/ML, the research community has focused primarily on the training of AI models, including their performance, trustworthiness, explainability, and scalability. Training, however, is only half of the work required to create an AI-based solution. The other half, using the trained model for inference during operations, is mistakenly considered a relatively mundane task. As a result, challenges arising at model inference time have received comparatively scant attention. Inference is when an AI model is put into practice, and it raises many challenges that are worth the attention of the research community. Although many pre-trained models are available on Internet sites, anyone trying to build an AI/ML-based solution would be hard-pressed to find one that is useful, trustworthy, reliable, and suitable for the task. Even when a custom model is trained, the solution often falters because the way the model is used fails to account for differences between the training and inference environments. In this paper, we identify those challenges and discuss how to design a generic inference server for trustworthy AI/ML-based solutions.
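To make the training/inference mismatch concrete, the following is a minimal sketch (not the paper's design) of one guard an inference server might apply: validating each request against the feature schema recorded at training time before invoking the model. All names here (`TRAINING_SCHEMA`, `predict_score`, `serve`) are illustrative assumptions, not APIs from the paper.

```python
# Illustrative sketch: an inference wrapper that guards against
# training/serving skew by checking each request's feature schema
# against the schema recorded when the model was trained.
# TRAINING_SCHEMA and predict_score are hypothetical stand-ins.

TRAINING_SCHEMA = {          # recorded at training time
    "age": float,
    "income": float,
    "region": str,
}

def predict_score(features):
    # stand-in for a real trained model's predict() call
    return 0.5 * features["age"] + 0.0001 * features["income"]

def serve(request):
    """Validate a request against the training-time schema, then infer."""
    missing = [k for k in TRAINING_SCHEMA if k not in request]
    if missing:
        raise ValueError(f"missing features: {missing}")
    badly_typed = [k for k, t in TRAINING_SCHEMA.items()
                   if not isinstance(request[k], t)]
    if badly_typed:
        raise TypeError(f"type mismatch for: {badly_typed}")
    return predict_score(request)

print(serve({"age": 40.0, "income": 52000.0, "region": "EU"}))
```

A production server would layer further checks on top of this (distribution-drift monitoring, versioned model metadata, fallback behavior on rejection), but even this schema check catches a common failure mode: requests that silently differ from the data the model was trained on.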