An extended review on cyber vulnerabilities of AI technologies in space applications: Technological challenges and international governance of AI

IF 1.0 · Q3 · Engineering, Aerospace
Paola Breda, Rada Markova, Adam F. Abdin, Nebile Pelin Mantı, Antonio Carlo, Devanshu Jha
{"title":"An extended review on cyber vulnerabilities of AI technologies in space applications: Technological challenges and international governance of AI","authors":"Paola Breda ,&nbsp;Rada Markova ,&nbsp;Adam F. Abdin ,&nbsp;Nebile Pelin Mantı ,&nbsp;Antonio Carlo ,&nbsp;Devanshu Jha","doi":"10.1016/j.jsse.2023.08.003","DOIUrl":null,"url":null,"abstract":"<div><p><span><span>The aerospace community and industry<span> have recently shown increasing interest towards the use of Artificial Intelligence (AI) for space applications, partially driven by the recent development of the NewSpace economy. AI has already come into extensive use in spacecraft operations, for example to support efficient operations of satellite constellations<span> and system health management. However, since most critical infrastructures rely on space systems, the use of new technologies, such as AI algorithms or increased system </span></span></span>autonomy<span> on-board, introduces further vulnerabilities on the system level. As a matter of fact, AI cyber security<span> is becoming an important aspect to ensure space safety and operational security. Apart from identifying new vulnerabilities that AI systems may introduce to space assets, this paper seeks for safety guidelines and technical standardisations developed for terrestrial applications<span> that can be applicable to AI systems in space. Existing policy guidance for cybersecurity and AI, especially for the European context, is discussed. To promote the safe use of AI technologies in space this work underlines the urgency for policymakers, governance, and technical institutions to initiate or further support the development of a suitable framework to address the new cyber-vulnerabilities introduced by AI technologies when applied to space systems. The paper suggests a regulatory approach based on technical standardisation in the field of AI, which is built upon a </span></span></span></span>multidisciplinary research of AI applications in non-space sectors where the level of autonomy is more advanced.</p></div>","PeriodicalId":37283,"journal":{"name":"Journal of Space Safety Engineering","volume":"10 4","pages":"Pages 447-458"},"PeriodicalIF":1.0000,"publicationDate":"2023-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Space Safety Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S246889672300068X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, AEROSPACE","Score":null,"Total":0}
Citations: 0

Abstract

The aerospace community and industry have recently shown increasing interest in the use of Artificial Intelligence (AI) for space applications, partially driven by the recent development of the NewSpace economy. AI has already come into extensive use in spacecraft operations, for example to support efficient operation of satellite constellations and system health management. However, since most critical infrastructures rely on space systems, the use of new technologies, such as AI algorithms or increased on-board system autonomy, introduces further vulnerabilities at the system level. AI cybersecurity is therefore becoming an important aspect of ensuring space safety and operational security. Apart from identifying new vulnerabilities that AI systems may introduce to space assets, this paper surveys safety guidelines and technical standards developed for terrestrial applications that may be applicable to AI systems in space. Existing policy guidance on cybersecurity and AI, especially in the European context, is discussed. To promote the safe use of AI technologies in space, this work underlines the urgency for policymakers, governance bodies, and technical institutions to initiate or further support the development of a suitable framework to address the new cyber vulnerabilities introduced by AI technologies when applied to space systems. The paper suggests a regulatory approach based on technical standardisation in the field of AI, built upon multidisciplinary research into AI applications in non-space sectors where the level of autonomy is more advanced.
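
For intuition only, and not drawn from the paper itself, the short Python sketch below illustrates the kind of AI cyber vulnerability the abstract refers to: a small, bounded perturbation of input telemetry can noticeably shift the output of a toy on-board health-monitoring classifier. The classifier, its weights, and the feature meanings are all assumptions made for this illustration.

    # Minimal, hypothetical sketch (not from the paper) of an adversarial
    # perturbation against a toy telemetry anomaly classifier.
    import numpy as np

    # Toy "telemetry" sample: four normalised features (e.g. temperature,
    # bus voltage, wheel speed, link margin) and a fixed linear model
    # standing in for a trained on-board health-monitoring classifier.
    x = np.array([0.2, -0.1, 0.4, 0.3])   # nominal telemetry vector (assumed)
    w = np.array([1.5, -2.0, 1.0, 0.5])   # classifier weights (assumed)
    b = -0.2                              # classifier bias (assumed)

    def nominal_score(v):
        """Probability that the telemetry is 'nominal' under a logistic model."""
        return 1.0 / (1.0 + np.exp(-(w @ v + b)))

    # For a logistic model the gradient of the score w.r.t. the input is
    # p * (1 - p) * w, so the sign of w gives the steepest attack direction.
    p = nominal_score(x)
    grad = p * (1.0 - p) * w

    # FGSM-style perturbation: step against the gradient sign to push the
    # score towards 'anomaly' while bounding every feature change by eps.
    eps = 0.15
    x_adv = x - eps * np.sign(grad)

    print(f"nominal score (clean telemetry):     {nominal_score(x):.3f}")
    print(f"nominal score (perturbed telemetry): {nominal_score(x_adv):.3f}")
    print(f"largest per-feature change:          {np.max(np.abs(x_adv - x)):.3f}")

With these assumed values, a per-feature change of at most 0.15 lowers the classifier's "nominal" confidence from roughly 0.70 to roughly 0.52; in an operational system such perturbations could enter through a compromised ground segment or spoofed sensor data, which is the class of risk the paper's governance discussion targets. The numerical values are purely illustrative.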

Source journal

Journal of Space Safety Engineering (Engineering: Safety, Risk, Reliability and Quality)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles published per year: 80