Investigating the relationships between class probabilities and users’ appropriate trust in computer vision classifications of ambiguous images
Gabriel Diniz Junqueira Barbosa, Dalai dos Santos Ribeiro, Marisa do Carmo Silva, Hélio Lopes, Simone Diniz Junqueira Barbosa
Journal of Computer Languages, vol. 72, Article 101149, October 2022. DOI: 10.1016/j.cola.2022.101149
Abstract
The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms raises pressing challenges, as these systems support or make decisions with profound consequences for human beings. It is important to understand how users’ trust is affected by ML models’ suggestions, even when those models are wrong. Many research efforts have focused on the user’s ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability: whether and how the presence of classification probabilities and their different distributions is related to users’ trust in model outcomes, especially for ambiguous instances. To this end, we conducted two online surveys in which we asked participants to evaluate their agreement with an ML model’s classifications of pictures of animals. In the first, we analyze their trust before and after presenting them with the model’s classification probabilities. In the second, we investigate the relationships between class probability distributions and users’ trust in the model. We found that, in some cases, the additional information is correlated with undue trust in the model’s classifications. However, in others, it is associated with inappropriate skepticism.
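To make the notion of "classification probabilities and their different distributions" concrete, the sketch below shows, under assumptions not drawn from the paper, how a softmax over classifier scores can yield either a peaked (confident) or a flat (ambiguous) distribution over animal classes. The class names, logits, and the softmax model itself are purely illustrative and are not the authors’ actual model, stimuli, or survey materials.

```python
# Hypothetical illustration of class probability distributions; the classes
# and scores below are made up and do not come from the study.
import numpy as np

def softmax(logits):
    """Convert raw classifier scores (logits) into class probabilities."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

classes = ["cat", "dog", "fox", "wolf"]

# A peaked distribution: one class clearly dominates (a "confident" output).
confident = softmax(np.array([6.0, 1.0, 0.5, 0.2]))

# A flat distribution: probability mass is spread across classes,
# as might occur for an ambiguous image.
ambiguous = softmax(np.array([2.1, 2.0, 1.9, 1.8]))

for name, probs in [("confident", confident), ("ambiguous", ambiguous)]:
    summary = ", ".join(f"{c}: {p:.2f}" for c, p in zip(classes, probs))
    print(f"{name}: {summary}")
```

In a survey setting such as the one described, participants could be shown an image together with either kind of distribution; the paper’s question is how such displays relate to appropriate or inappropriate trust in the model’s top classification.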