AI Trust Framework and Maturity Model: Improving Security, Ethics and Trust in AI

Michael Mylrea, Nikki Robinson
Cybersecurity and Innovative Technology Journal · Published 2023-09-11 · DOI: 10.53889/citj.v1i1.198 · Citations: 0

Abstract

The following article develops an AI Trust Framework and Maturity Model (AI-TFMM) to improve trust in AI technologies used by Autonomous Human-Machine Team Systems (A-HMT-S). The framework establishes a methodology for quantifying trust in AI technologies. Key areas of exploration include security, privacy, explainability, transparency, and other requirements for AI technologies to be ethical in their development and application. A maturity-model approach to measuring trust is applied to close gaps in quantifying trust and its associated evaluation metrics. Finding the right balance between performance, governance, and ethics also raises several critical questions about AI technology and trust. The research examines the methods needed to develop an AI-TFMM and validates it against a popular AI technology (ChatGPT). OpenAI's GPT, which stands for "Generative Pre-trained Transformer," is a deep-learning language model that generates human-like text by predicting the next word in a sequence based on a given prompt. ChatGPT is a version of GPT tailored for conversation and dialogue; it has been trained on a dataset of human conversations to generate responses that are coherent and relevant to the context. The article concludes with results and conclusions from testing the AI-TFMM as applied to AI technology. Based on these findings, the paper highlights gaps that future research could fill to improve the accuracy, efficacy, application, and methodology of the AI-TFMM.
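To make the maturity-model idea concrete, the sketch below scores an AI system on the trust dimensions the abstract names (security, privacy, explainability, transparency) and aggregates them into a single maturity score. The five-level scale, the level names, and the unweighted average are illustrative assumptions for this sketch, not the authors' published AI-TFMM rubric.

```python
# Hypothetical maturity-model scoring sketch for AI trust.
# The dimension names come from the abstract; the 1-5 level scale,
# level labels, and averaging rule are assumptions, not the AI-TFMM itself.
from dataclasses import dataclass

LEVELS = {1: "Initial", 2: "Developing", 3: "Defined", 4: "Managed", 5: "Optimized"}

@dataclass
class DimensionScore:
    name: str
    level: int  # maturity level, 1 (Initial) .. 5 (Optimized)

    def __post_init__(self):
        if self.level not in LEVELS:
            raise ValueError(f"level must be one of {sorted(LEVELS)}")

def overall_maturity(scores: list[DimensionScore]) -> float:
    """Aggregate per-dimension levels into one trust maturity score
    (simple unweighted mean; a real framework might weight dimensions)."""
    return sum(s.level for s in scores) / len(scores)

assessment = [
    DimensionScore("security", 3),
    DimensionScore("privacy", 2),
    DimensionScore("explainability", 2),
    DimensionScore("transparency", 3),
]
print(overall_maturity(assessment))  # 2.5
```

A scoring structure like this makes the "gaps in quantifying trust" that the abstract mentions visible: each dimension gets an explicit, comparable level rather than an unstated judgment.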