Transparent systems, opaque results: a study on automation compliance and task performance
Rebecca L. Pharmer, Christopher D. Wickens, Benjamin A. Clegg
Cognitive Research: Principles and Implications, 10(1), 8. Published 2025-02-21. DOI: 10.1186/s41235-025-00619-4
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11845646/pdf/
Abstract
In two experiments, we examine how features of an imperfect automated decision aid influence compliance with the aid in a simplified, simulated nautical collision avoidance task. Experiment 1 examined the impact of providing transparency in the pre-task instructions regarding which attributes of the task the aid uses to generate its recommendations. This form of transparency positively influenced compliance with the aid, leading to better task performance. Experiment 2 manipulated transparency via confidence estimates presented alongside the aid's recommendations; this form of transparency produced no benefits. In Experiment 2, compliance with the aid's recommendations was lower on more difficult collision problems, an effect mediated by a loss of aid reliability and, in turn, trust. This runs contrary to the hypothesis that harder-to-solve problems should make participants more, rather than less, dependent on the aid. Both experiments produced relatively low correlations between trust and compliance. The findings have important implications for the effectiveness of different kinds of transparency implementations, and they provide a model/framework for understanding how generic factors such as automation reliability and problem difficulty influence both compliance and trust.
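The mediation pathway described in the abstract (harder problems lead to lower effective aid reliability, which lowers trust, which lowers compliance) can be illustrated with a small simulation. The sketch below is not the authors' analysis: the data, variable names, and effect sizes are invented for illustration, and the regressions are a simple least-squares demonstration of how a total effect of problem difficulty on compliance can be carried through reliability and trust.

```python
# Hypothetical sketch of the difficulty -> reliability -> trust -> compliance
# mediation chain suggested by the abstract. All quantities are simulated;
# nothing here reproduces the paper's data or statistical procedure.
import numpy as np

rng = np.random.default_rng(0)
n = 500

difficulty = rng.uniform(0, 1, n)                                # higher = harder problem
reliability = 0.9 - 0.4 * difficulty + rng.normal(0, 0.05, n)    # aid less reliable when hard
trust = 0.2 + 0.7 * reliability + rng.normal(0, 0.05, n)         # trust tracks reliability
compliance = 0.1 + 0.8 * trust + rng.normal(0, 0.05, n)          # compliance tracks trust

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

# Total effect of difficulty on compliance, ignoring the mediators.
total_effect = slope(difficulty, compliance)

# Direct effect of difficulty on compliance after controlling for trust;
# it shrinks toward zero because trust carries the mediated effect.
X = np.column_stack([np.ones(n), difficulty, trust])
coef, *_ = np.linalg.lstsq(X, compliance, rcond=None)
direct_effect = coef[1]

print(f"total effect of difficulty on compliance: {total_effect:+.3f}")
print(f"direct effect controlling for trust:      {direct_effect:+.3f}")
print(f"correlation(trust, compliance):           {np.corrcoef(trust, compliance)[0, 1]:.2f}")
```

Running the sketch shows a negative total effect of difficulty on compliance that largely disappears once trust is controlled for, which is the pattern a mediation account of the Experiment 2 result would predict under these made-up parameters.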