Space Applications of a Trusted AI Framework: Experiences and Lessons Learned

L. Mandrake, G. Doran, Ashish Goel, H. Ono, R. Amini, M. Feather, L. Fesq, Philip C. Slingerland, Lauren Perry, Benjamen Bycroft, James Kaufman
{"title":"可信人工智能框架的空间应用:经验和教训","authors":"L. Mandrake, G. Doran, Ashish Goel, H. Ono, R. Amini, M. Feather, L. Fesq, Philip C. Slingerland, Lauren Perry, Benjamen Bycroft, James Kaufman","doi":"10.1109/AERO53065.2022.9843322","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI), which encompasses machine learning (ML), has become a critical technology due to its well-established success in a wide array of applications. However, the proper application of AI remains a central topic of discussion in many safety-critical fields. This has limited its success in autonomous systems due to the difficulty of ensuring AI algorithms will perform as desired and that users will understand and trust how they operate. In response, there is growing demand for trustability in AI to address both the expectations and concerns regarding its use. The Aerospace Corporation (Aerospace) developed a Framework for Trusted AI (henceforth referred to as the framework) to encourage best practices for the implementation, assessment, and control of AI-based applications. It is generally applicable, being based on terms and definitions that cut across AI domains, and thus is a starting point for practitioners to tailor to their particular application. To help demonstrate how the framework can be tailored into mission assurance guidance for the space domain, Aerospace sought the involvement of the Jet Propulsion Laboratory (JPL) to engage with actual examples of AI-based space autonomy. We report here on the framework's application to two JPL projects. The first, Machine learning-based Analytics for Automated Rover Systems (MAARS), is a suite of algorithms that is intended to run onboard a rover to enhance its safety and productivity. The second, the Ocean Worlds Life Surveyor (OWLS), is comprised of an instrument suite and onboard software that is designed to search for life on an icy moon using microscopy and mass spectrometry while judiciously summarizing and prioritizing science data for downlink. Both MAARS and OWLS are intended to have minimal manual control while relying on complex autonomy software to operate within the unforgiving environment of deep space. Therefore, trusted AI for these systems is required for successful adoption of the autonomy software. To capture the needs for trust, interviews with a variety of JPL personnel responsible for developing autonomy solutions were conducted and are summarized here. Additionally, the application of the framework is presented as a means to lower the barrier for AI deployment. The intent of this document is to encourage researchers, engineers, and program managers to adopt new strategies when considering whether to leverage AI in autonomous systems.","PeriodicalId":219988,"journal":{"name":"2022 IEEE Aerospace Conference (AERO)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Space Applications of a Trusted AI Framework: Experiences and Lessons Learned\",\"authors\":\"L. Mandrake, G. Doran, Ashish Goel, H. Ono, R. Amini, M. Feather, L. Fesq, Philip C. Slingerland, Lauren Perry, Benjamen Bycroft, James Kaufman\",\"doi\":\"10.1109/AERO53065.2022.9843322\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI), which encompasses machine learning (ML), has become a critical technology due to its well-established success in a wide array of applications. 
However, the proper application of AI remains a central topic of discussion in many safety-critical fields. This has limited its success in autonomous systems due to the difficulty of ensuring AI algorithms will perform as desired and that users will understand and trust how they operate. In response, there is growing demand for trustability in AI to address both the expectations and concerns regarding its use. The Aerospace Corporation (Aerospace) developed a Framework for Trusted AI (henceforth referred to as the framework) to encourage best practices for the implementation, assessment, and control of AI-based applications. It is generally applicable, being based on terms and definitions that cut across AI domains, and thus is a starting point for practitioners to tailor to their particular application. To help demonstrate how the framework can be tailored into mission assurance guidance for the space domain, Aerospace sought the involvement of the Jet Propulsion Laboratory (JPL) to engage with actual examples of AI-based space autonomy. We report here on the framework's application to two JPL projects. The first, Machine learning-based Analytics for Automated Rover Systems (MAARS), is a suite of algorithms that is intended to run onboard a rover to enhance its safety and productivity. The second, the Ocean Worlds Life Surveyor (OWLS), is comprised of an instrument suite and onboard software that is designed to search for life on an icy moon using microscopy and mass spectrometry while judiciously summarizing and prioritizing science data for downlink. Both MAARS and OWLS are intended to have minimal manual control while relying on complex autonomy software to operate within the unforgiving environment of deep space. Therefore, trusted AI for these systems is required for successful adoption of the autonomy software. To capture the needs for trust, interviews with a variety of JPL personnel responsible for developing autonomy solutions were conducted and are summarized here. Additionally, the application of the framework is presented as a means to lower the barrier for AI deployment. The intent of this document is to encourage researchers, engineers, and program managers to adopt new strategies when considering whether to leverage AI in autonomous systems.\",\"PeriodicalId\":219988,\"journal\":{\"name\":\"2022 IEEE Aerospace Conference (AERO)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Aerospace Conference (AERO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AERO53065.2022.9843322\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Aerospace Conference (AERO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AERO53065.2022.9843322","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Artificial intelligence (AI), which encompasses machine learning (ML), has become a critical technology due to its well-established success in a wide array of applications. However, the proper application of AI remains a central topic of discussion in many safety-critical fields. This has limited its success in autonomous systems due to the difficulty of ensuring AI algorithms will perform as desired and that users will understand and trust how they operate. In response, there is growing demand for trustability in AI to address both the expectations and concerns regarding its use. The Aerospace Corporation (Aerospace) developed a Framework for Trusted AI (henceforth referred to as the framework) to encourage best practices for the implementation, assessment, and control of AI-based applications. It is generally applicable, being based on terms and definitions that cut across AI domains, and thus is a starting point for practitioners to tailor to their particular application. To help demonstrate how the framework can be tailored into mission assurance guidance for the space domain, Aerospace sought the involvement of the Jet Propulsion Laboratory (JPL) to engage with actual examples of AI-based space autonomy. We report here on the framework's application to two JPL projects. The first, Machine learning-based Analytics for Automated Rover Systems (MAARS), is a suite of algorithms that is intended to run onboard a rover to enhance its safety and productivity. The second, the Ocean Worlds Life Surveyor (OWLS), is comprised of an instrument suite and onboard software that is designed to search for life on an icy moon using microscopy and mass spectrometry while judiciously summarizing and prioritizing science data for downlink. Both MAARS and OWLS are intended to have minimal manual control while relying on complex autonomy software to operate within the unforgiving environment of deep space. Therefore, trusted AI for these systems is required for successful adoption of the autonomy software. To capture the needs for trust, interviews with a variety of JPL personnel responsible for developing autonomy solutions were conducted and are summarized here. Additionally, the application of the framework is presented as a means to lower the barrier for AI deployment. The intent of this document is to encourage researchers, engineers, and program managers to adopt new strategies when considering whether to leverage AI in autonomous systems.
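The abstract notes that OWLS must judiciously summarize and prioritize science data for downlink. The sketch below is a minimal, hypothetical illustration of that idea, not the actual OWLS or MAARS flight software: it greedily selects data products by estimated science value per byte under a fixed downlink budget. All names and values here (ScienceProduct, science_score, budget_bytes, and the example products) are assumptions introduced purely for illustration.

from dataclasses import dataclass


@dataclass
class ScienceProduct:
    name: str              # identifier for the data product (hypothetical)
    size_bytes: int        # compressed size of the product
    science_score: float   # onboard estimate of scientific value, 0..1


def prioritize_for_downlink(products, budget_bytes):
    """Greedily select products by science value per byte until the
    downlink budget is exhausted; return them in transmission order."""
    ranked = sorted(
        products,
        key=lambda p: p.science_score / max(p.size_bytes, 1),
        reverse=True,
    )
    selected, used = [], 0
    for product in ranked:
        if used + product.size_bytes <= budget_bytes:
            selected.append(product)
            used += product.size_bytes
    return selected


if __name__ == "__main__":
    # Hypothetical products: a microscopy track summary, raw frames, and a
    # mass-spectrometry peak list competing for a small downlink window.
    candidates = [
        ScienceProduct("motile_track_summary", 20_000, 0.95),
        ScienceProduct("raw_microscopy_frames", 5_000_000, 0.60),
        ScienceProduct("mass_spec_peak_list", 50_000, 0.85),
    ]
    for p in prioritize_for_downlink(candidates, budget_bytes=100_000):
        print(p.name)

A flight implementation would also have to account for operational constraints, fault protection, and the verifiability of the scoring model itself, which are exactly the kinds of trust concerns the framework discussed in the paper is meant to address.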