Trustworthy AI Development Guidelines for Human System Interaction

Chathurika S. Wickramasinghe, Daniel L. Marino, J. Grandio, M. Manic
{"title":"人类系统交互的可信赖AI开发指南","authors":"Chathurika S. Wickramasinghe, Daniel L. Marino, J. Grandio, M. Manic","doi":"10.1109/HSI49210.2020.9142644","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency of existing AI systems. As a solution, “Trustworthy AI” research area merged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is to build human trust to improve the interactions between human and AI systems. In this paper, we provide a concise survey on concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving the user trust to enhance the interactions between AI systems and humans, that happen during the AI system life cycle.","PeriodicalId":371828,"journal":{"name":"2020 13th International Conference on Human System Interaction (HSI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":"{\"title\":\"Trustworthy AI Development Guidelines for Human System Interaction\",\"authors\":\"Chathurika S. Wickramasinghe, Daniel L. Marino, J. Grandio, M. Manic\",\"doi\":\"10.1109/HSI49210.2020.9142644\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency of existing AI systems. As a solution, “Trustworthy AI” research area merged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is to build human trust to improve the interactions between human and AI systems. In this paper, we provide a concise survey on concepts of trustworthy AI. 
Further, we present trustworthy AI development guidelines for improving the user trust to enhance the interactions between AI systems and humans, that happen during the AI system life cycle.\",\"PeriodicalId\":371828,\"journal\":{\"name\":\"2020 13th International Conference on Human System Interaction (HSI)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"24\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 13th International Conference on Human System Interaction (HSI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HSI49210.2020.9142644\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 13th International Conference on Human System Interaction (HSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HSI49210.2020.9142644","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 24

Abstract

Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems, caused by the limited transparency of existing AI systems. As a solution, the “Trustworthy AI” research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work focuses on building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of the concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that occur during the AI system life cycle.