{"title":"警告人们人工智能出错的风险,可以减轻人类对人工智能的偏见。","authors":"Lucía Vicente, Helena Matute","doi":"10.1186/s41235-026-00726-w","DOIUrl":null,"url":null,"abstract":"<p><p>Empirical evidence has demonstrated the power of AI to influence human decisions and the risk of humans acquiring AI biases. Therefore, there is a clear need to develop strategies to mitigate such threat. In three experiments, set in a medical context, we tested whether warning individuals about AI biases and errors could mitigate the negative impact of AI biases on their decisions and reduce the transmission of AI biases to humans. In Experiment 1, participants received explicit information about the percentage of erroneous AI recommendations but with two different framings: in terms of AI accuracy or AI risk of error. Our results showed that emphasising the risk of AI errors, more than its accuracy, reduced people's tendency to follow incorrect AI suggestions and to acquire biases from AI. In Experiment 2, a more general warning message alerting of possible AI errors and biases was also effective in reducing bias acquisition. Experiment 3 showed that, although the warning message provided some protection against bias, participants who received AI support still made more errors than participants who completed the classification task without any assistance. Experiments 2 and 3 also investigated whether the type of error made by the AI, a false positive or a false negative, influenced participants' tendency to adhere to its suggestions, and the effect of the warning message. However, no significant effects were found. 
Overall, our results highlight the importance of informing users about the risk of AI error rather than focusing solely on accuracy.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"11 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2026-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133307/pdf/","citationCount":"0","resultStr":"{\"title\":\"Warning people about the risk of AI error mitigates human acquisition of AI bias.\",\"authors\":\"Lucía Vicente, Helena Matute\",\"doi\":\"10.1186/s41235-026-00726-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Empirical evidence has demonstrated the power of AI to influence human decisions and the risk of humans acquiring AI biases. Therefore, there is a clear need to develop strategies to mitigate such threat. In three experiments, set in a medical context, we tested whether warning individuals about AI biases and errors could mitigate the negative impact of AI biases on their decisions and reduce the transmission of AI biases to humans. In Experiment 1, participants received explicit information about the percentage of erroneous AI recommendations but with two different framings: in terms of AI accuracy or AI risk of error. Our results showed that emphasising the risk of AI errors, more than its accuracy, reduced people's tendency to follow incorrect AI suggestions and to acquire biases from AI. In Experiment 2, a more general warning message alerting of possible AI errors and biases was also effective in reducing bias acquisition. Experiment 3 showed that, although the warning message provided some protection against bias, participants who received AI support still made more errors than participants who completed the classification task without any assistance. 
Experiments 2 and 3 also investigated whether the type of error made by the AI, a false positive or a false negative, influenced participants' tendency to adhere to its suggestions, and the effect of the warning message. However, no significant effects were found. Overall, our results highlight the importance of informing users about the risk of AI error rather than focusing solely on accuracy.</p>\",\"PeriodicalId\":46827,\"journal\":{\"name\":\"Cognitive Research-Principles and Implications\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2026-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133307/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Research-Principles and Implications\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1186/s41235-026-00726-w\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Research-Principles and Implications","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1186/s41235-026-00726-w","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Warning people about the risk of AI error mitigates human acquisition of AI bias.
Empirical evidence has demonstrated the power of AI to influence human decisions and the risk of humans acquiring AI biases. Therefore, there is a clear need to develop strategies to mitigate this threat. In three experiments set in a medical context, we tested whether warning individuals about AI biases and errors could mitigate the negative impact of AI biases on their decisions and reduce the transmission of AI biases to humans. In Experiment 1, participants received explicit information about the percentage of erroneous AI recommendations, but with two different framings: in terms of AI accuracy or in terms of AI risk of error. Our results showed that emphasising the risk of AI errors, rather than its accuracy, reduced people's tendency to follow incorrect AI suggestions and to acquire biases from the AI. In Experiment 2, a more general warning message alerting participants to possible AI errors and biases was also effective in reducing bias acquisition. Experiment 3 showed that, although the warning message provided some protection against bias, participants who received AI support still made more errors than participants who completed the classification task without any assistance. Experiments 2 and 3 also investigated whether the type of error made by the AI, a false positive or a false negative, influenced participants' tendency to adhere to its suggestions, as well as the effect of the warning message; however, no significant effects were found. Overall, our results highlight the importance of informing users about the risk of AI error rather than focusing solely on accuracy.
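The manipulation in Experiment 1 hinges on the arithmetic identity between the two framings: an accuracy figure and an error-rate figure carry the same statistic, since accuracy = 1 − error rate. A minimal sketch of the two message framings, with placeholder percentages that are NOT values reported in the paper:

```python
# Hypothetical illustration of the two framings in Experiment 1.
# The error rate used here (0.1) is a placeholder, not a figure from the study.

def accuracy_framing(error_rate: float) -> str:
    """Frame the AI's reliability in terms of accuracy (1 - error rate)."""
    return f"The AI is correct in {100 * (1 - error_rate):.0f}% of cases."

def risk_framing(error_rate: float) -> str:
    """Frame the same statistic in terms of the risk of error."""
    return f"The AI makes errors in {100 * error_rate:.0f}% of cases."

# Both sentences encode an identical error rate; only the emphasis differs.
print(accuracy_framing(0.1))  # The AI is correct in 90% of cases.
print(risk_framing(0.1))      # The AI makes errors in 10% of cases.
```

The study's finding is that the second, risk-emphasising form reduced adherence to incorrect AI suggestions even though the underlying number is informationally equivalent.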