Explained Artificial Intelligence Helps to Integrate Artificial and Human Intelligence Into Medical Diagnostic Systems: Analytical Review of Publications

M. Farkhadov, Aleksander Eliseev, N. Petukhova
{"title":"解释人工智能有助于将人工智能和人类智能集成到医疗诊断系统中:出版物的分析评论","authors":"M. Farkhadov, Aleksander Eliseev, N. Petukhova","doi":"10.1109/AICT50176.2020.9368576","DOIUrl":null,"url":null,"abstract":"Artificial intelligence-based medical systems can by now diagnose various disorders highly accurately. However, we should stress that despite encouraging and ever improving results, people still distrust such systems. We review relevant publications over the past five years, to identify the main causes of such mistrust and ways to overcome it. Our study showes that the main reasons to distrust these systems are opaque models, blackbox algorithms, and potentially unrepresentful training samples. We demonstrate that explainable artificial intelligence, aimed to create more user-friendly and understandable systems, has become a noticeable new topic in theoretical research and practical development. Another notable trend is to develop approaches to build hybrid systems, where artificial and human intelligence interact according to the teamwork model.","PeriodicalId":136491,"journal":{"name":"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Explained Artificial Intelligence Helps to Integrate Artificial and Human Intelligence Into Medical Diagnostic Systems: Analytical Review of Publications\",\"authors\":\"M. Farkhadov, Aleksander Eliseev, N. Petukhova\",\"doi\":\"10.1109/AICT50176.2020.9368576\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence-based medical systems can by now diagnose various disorders highly accurately. However, we should stress that despite encouraging and ever improving results, people still distrust such systems. We review relevant publications over the past five years, to identify the main causes of such mistrust and ways to overcome it. Our study showes that the main reasons to distrust these systems are opaque models, blackbox algorithms, and potentially unrepresentful training samples. We demonstrate that explainable artificial intelligence, aimed to create more user-friendly and understandable systems, has become a noticeable new topic in theoretical research and practical development. 
Another notable trend is to develop approaches to build hybrid systems, where artificial and human intelligence interact according to the teamwork model.\",\"PeriodicalId\":136491,\"journal\":{\"name\":\"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICT50176.2020.9368576\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICT50176.2020.9368576","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Artificial intelligence-based medical systems can now diagnose various disorders with high accuracy. However, despite these encouraging and steadily improving results, people still distrust such systems. We review relevant publications from the past five years to identify the main causes of this mistrust and ways to overcome it. Our study shows that the main reasons for distrust are opaque models, black-box algorithms, and potentially unrepresentative training samples. We demonstrate that explainable artificial intelligence, which aims to create more user-friendly and understandable systems, has become a notable new topic in theoretical research and practical development. Another notable trend is the development of approaches for building hybrid systems in which artificial and human intelligence interact according to a teamwork model.
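As a purely illustrative sketch of the kind of explainability technique the reviewed literature discusses (not a method described in this paper), the snippet below explains a black-box diagnostic classifier with permutation feature importance in scikit-learn; the dataset, model, and all parameter choices are assumptions made for the example.

```python
# Illustrative sketch only: explain which inputs a "black-box" diagnostic
# classifier relies on, using permutation feature importance.
# Dataset, model, and settings are hypothetical, not from the reviewed paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# An opaque model of the kind the review says clinicians tend to distrust.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much accuracy drops; larger drops indicate the model
# depends more heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} "
          f"importance={result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Model-agnostic summaries like this are one way to make a black-box prediction more inspectable by a human expert, which is the direction the abstract describes for hybrid human-AI diagnostic workflows.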