Visual Analytics for Explainable and Trustworthy Artificial Intelligence.

Impact Factor 1.7 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering)
Angelos Chatzimparmpas, Sumanta N. Pattanaik
{"title":"Visual Analytics for Explainable and Trustworthy Artificial Intelligence.","authors":"Angelos Chatzimparmpas, Sumanta N Pattanaik","doi":"10.1109/MCG.2025.3533806","DOIUrl":null,"url":null,"abstract":"<p><p>Our society increasingly depends on intelligent systems to solve complex problems, ranging from recommender systems suggesting the next movie to watch to AI models assisting in medical diagnoses for hospitalized patients. With the iterative improvement of diagnostic accuracy and efficiency, AI holds significant potential to mitigate medical misdiagnoses by preventing numerous deaths and reducing an economic burden of approximately € 450 billion annually. However, a key obstacle to AI adoption lies in the lack of transparency, that is, many automated systems provide predictions without revealing the underlying processes. This opacity can hinder experts' ability to trust and rely on AI systems. Visual analytics (VA) provides a compelling solution by combining AI models with interactive visualizations. These specialized charts and graphs empower users to incorporate their domain expertise to refine and improve the models, bridging the gap between AI and human understanding. In this work, the author defines, categorizes, and explores how VA solutions can foster trust across the stages of a typical AI pipeline. The author proposes a design space for innovative visualizations and presents an overview of our previously developed VA dashboards, which support critical tasks within the various pipeline stages, including data processing, feature engineering, hyperparameter tuning, understanding, debugging, refining, and comparing models.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"45 2","pages":"100-111"},"PeriodicalIF":1.7000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Computer Graphics and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/MCG.2025.3533806","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Our society increasingly depends on intelligent systems to solve complex problems, ranging from recommender systems suggesting the next movie to watch to AI models assisting in medical diagnoses for hospitalized patients. With the iterative improvement of diagnostic accuracy and efficiency, AI holds significant potential to mitigate medical misdiagnoses, thereby preventing numerous deaths and reducing an economic burden of approximately €450 billion annually. However, a key obstacle to AI adoption lies in the lack of transparency: many automated systems provide predictions without revealing the underlying processes. This opacity can hinder experts' ability to trust and rely on AI systems. Visual analytics (VA) provides a compelling solution by combining AI models with interactive visualizations. These specialized charts and graphs empower users to incorporate their domain expertise to refine and improve the models, bridging the gap between AI and human understanding. In this work, the author defines, categorizes, and explores how VA solutions can foster trust across the stages of a typical AI pipeline. The author proposes a design space for innovative visualizations and presents an overview of previously developed VA dashboards, which support critical tasks within the various pipeline stages, including data processing, feature engineering, hyperparameter tuning, and model understanding, debugging, refinement, and comparison.
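To make the pipeline stages listed above concrete, the following is a minimal, illustrative sketch of the kind of analysis a VA tool might expose at the model-understanding stage: estimating which features drive a trained classifier's predictions so that domain experts can judge whether the model's reasoning is plausible. This is not the author's actual dashboards, which are interactive web-based VA systems; it assumes scikit-learn, matplotlib, and a standard tabular benchmark dataset purely for demonstration.

# Minimal sketch of a "model understanding" view: which features influence predictions.
# Illustrative only; the paper's VA dashboards are interactive and not reproduced here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Train a simple classifier on a tabular, medically themed benchmark dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance measures how much shuffling each feature degrades accuracy;
# a VA dashboard would let experts browse such scores interactively and link them
# back to individual predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[-10:]

plt.barh(X.columns[top], result.importances_mean[top])
plt.xlabel("Mean decrease in accuracy when feature is permuted")
plt.title("Top 10 features by permutation importance")
plt.tight_layout()
plt.show()

A static chart like this covers only one stage; the VA perspective described in the abstract links such views across data processing, feature engineering, hyperparameter tuning, and model comparison so that expert feedback can flow back into the model.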

Source Journal
IEEE Computer Graphics and Applications (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 3.20
Self-citation rate: 5.60%
Annual articles: 160
Review time: >12 weeks
Journal Description: IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics, visualization, virtual and augmented reality, and HCI. From specific algorithms to full system implementations, CG&A offers a unique combination of peer-reviewed feature articles and informal departments. Theme issues guest edited by leading researchers in their fields track the latest developments and trends in computer-generated graphical content, while tutorials and surveys provide a broad overview of interesting and timely topics. Regular departments further explore the core areas of graphics as well as extend into topics such as usability, education, history, and opinion. Each issue, the story of our cover focuses on creative applications of the technology by an artist or designer. Published six times a year, CG&A is indispensable reading for people working at the leading edge of computer-generated graphics technology and its applications in everything from business to the arts.