Enabling Trust in Autonomous Human-Machine Teaming

Ming Hou
{"title":"在自主人机团队中实现信任","authors":"Ming Hou","doi":"10.1109/ICAS49788.2021.9551153","DOIUrl":null,"url":null,"abstract":"The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent accidents involving Boeing 737 Max passengers ring the alarm again about the potential risks when using human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling “trust” between the human-machine team. It is even more challenging when we are facing insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This calls for the imperative needs of appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based and interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security) will also be introduced to enable the practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will be explained to illustrate the utility and effectiveness of these trust enablers.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Enabling Trust in Autonomous Human-Machine Teaming\",\"authors\":\"Ming Hou\",\"doi\":\"10.1109/ICAS49788.2021.9551153\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent accidents involving Boeing 737 Max passengers ring the alarm again about the potential risks when using human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling “trust” between the human-machine team. It is even more challenging when we are facing insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This calls for the imperative needs of appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. 
This talk discusses a context-based and interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security) will also be introduced to enable the practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will be explained to illustrate the utility and effectiveness of these trust enablers.\",\"PeriodicalId\":287105,\"journal\":{\"name\":\"2021 IEEE International Conference on Autonomous Systems (ICAS)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Autonomous Systems (ICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAS49788.2021.9551153\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Autonomous Systems (ICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAS49788.2021.9551153","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

The advancement of AI enables the evolution of machines from relatively simple automation to fully autonomous systems that augment human capabilities, improving quality and productivity in work and life. The singularity is near! However, humans remain vulnerable. The COVID-19 pandemic reminds us how limited our knowledge of nature is, and the recent Boeing 737 Max accidents sound the alarm again about the potential risks of human-autonomy symbiosis technologies. A key challenge for safe and effective human-autonomy teaming is enabling "trust" within the human-machine team. The challenge grows when we face insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This creates an imperative need for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based, interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), will also be introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will be explained to illustrate the utility and effectiveness of these trust enablers.
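
The IMPACTS dimensions read like a practical checklist for assessing a human-autonomy partnership. As a minimal, purely illustrative sketch (not taken from the talk or the paper), the Python snippet below represents a hypothetical per-dimension assessment and averages the scores into a single calibrated-trust indicator; the `ImpactsAssessment` class, the equal weighting, and the 0.7 "weak dimension" threshold are all assumptions introduced here for illustration.

```python
# Hypothetical illustration of an IMPACTS-style checklist; the class, the
# equal weighting, and the threshold are assumptions, not part of the talk.
from dataclasses import dataclass, fields


@dataclass
class ImpactsAssessment:
    """Scores in [0, 1] for each IMPACTS trust dimension."""
    intention: float
    measurability: float
    performance: float
    adaptivity: float
    communications: float
    transparency: float
    security: float

    def calibrated_trust(self) -> float:
        """Average the dimension scores into a single indicator."""
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)


if __name__ == "__main__":
    assessment = ImpactsAssessment(
        intention=0.9, measurability=0.7, performance=0.8,
        adaptivity=0.6, communications=0.9, transparency=0.8, security=0.95,
    )
    score = assessment.calibrated_trust()
    # Flag dimensions that drag calibrated trust down for the operator.
    weak = [f.name for f in fields(assessment) if getattr(assessment, f.name) < 0.7]
    print(f"calibrated trust: {score:.2f}; weak dimensions: {weak}")
```

Equal weights are used only to keep the sketch short; in practice the weighting, and whether a single scalar is even meaningful, would depend on the operational context the ICD approach is meant to capture.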